Hide GRIP docs since it is unmaintained #2365

Merged 3 commits on Oct 27, 2023
1 change: 1 addition & 0 deletions source/conf.py
@@ -174,6 +174,7 @@
exclude_patterns = [
    "docs/yearly-overview/2020-Game-Data.rst",
    "docs/software/wpilib-tools/axon/**",
    "docs/software/vision-processing/grip/**",
]

# Specify the master doc file, AKA our homepage
@@ -1,6 +1,6 @@
Reading Array Values Published by NetworkTables
===============================================
This article describes how to read values published by :term:`NetworkTables` using a program running on the robot. This is useful when using computer vision where the images are processed on your driver station laptop and the results stored into NetworkTables possibly using a separate vision processor like a raspberry pi, or a tool on the robot like GRIP, or a python program to do the image processing.
This article describes how to read values published by :term:`NetworkTables` using a program running on the robot. This is useful when using computer vision where the images are processed on your driver station laptop and the results are stored in NetworkTables, possibly by a separate vision processor like a Raspberry Pi or by a Python program on the robot doing the image processing.

Very often the values are for one or more areas of interest such as goals or game pieces and multiple instances are returned. In the example below, several x, y, width, height, and areas are returned by the image processor and the robot program can sort out which of the returned values are interesting through further processing.
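
As a rough illustration of that sorting step (this is not the article's own example; the ``Vision`` table and the ``x``, ``y``, ``width``, ``height``, and ``area`` keys are placeholders for whatever your vision program actually publishes), a Python robot program might read the parallel arrays and pick the largest contour:

.. code-block:: python

   import ntcore


   def largest_target():
       """Return (x, y, width, height, area) of the biggest published contour, or None."""
       table = ntcore.NetworkTableInstance.getDefault().getTable("Vision")
       xs = table.getEntry("x").getDoubleArray([])
       ys = table.getEntry("y").getDoubleArray([])
       widths = table.getEntry("width").getDoubleArray([])
       heights = table.getEntry("height").getDoubleArray([])
       areas = table.getEntry("area").getDoubleArray([])

       if not areas:
           return None  # nothing published yet
       # The arrays are parallel: index i describes the same contour in each.
       i = max(range(len(areas)), key=lambda i: areas[i])
       return xs[i], ys[i], widths[i], heights[i], areas[i]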

1 change: 0 additions & 1 deletion source/docs/software/vision-processing/index.rst
@@ -7,5 +7,4 @@ Vision Processing
introduction/index
wpilibpi/index
apriltag/index
grip/index
roborio/index
@@ -7,52 +7,3 @@ LabVIEW
-------

The 2017 LabVIEW Vision Example is included with the other LabVIEW examples. From the Splash screen, click Support->Find FRC\ |reg| Examples or from any other LabVIEW window, click Help->Find Examples and locate the Vision folder to find the 2017 Vision Example. The example images are bundled with the example.

C++/Java
--------

We have provided a GRIP project and the description below, as well as the example images, bundled into a ZIP that `can be found on TeamForge <https://usfirst.collab.net/sf/frs/do/viewRelease/projects.wpilib/frs.sample_programs.2017_c_java_vision_sample>`_.

See :ref:`docs/software/vision-processing/grip/using-generated-code-in-a-robot-program:Using Generated Code in a Robot Program` for details about integrating GRIP generated code in your robot program.

The code generated by the included GRIP project will find OpenCV contours for green particles in images like the ones included in the Vision Images folder of this ZIP. From there you may wish to further process these contours to assess if they are the target. To do this:

1. Use the boundingRect method to draw bounding rectangles around the contours
2. The LabVIEW example code calculates 5 separate ratios for the target. Each of these ratios should nominally equal 1.0. To do this, it sorts the contours by size, then starting with the largest, calculates these values for every possible pair of contours that may be the target, and stops if it finds a target or returns the best pair it found.

In the formulas below, each letter refers to a coordinate of the bounding rect (H = Height, L = Left, T = Top, B = Bottom, W = Width) and the numeric subscript refers to the contour number (1 is the largest contour, 2 is the second largest, etc).

- Top height should be 40% of total height (4 in / 10 in):

.. math:: \textit{Group Height} = \frac{H_1}{0.4 (B_2 - T_1)}

- Top of bottom stripe to top of top stripe should be 60% of total height (6 in / 10 in):

.. math:: \textit{dTop} = \frac{T_2 - T_1}{0.6 (B_2 - T_1)}

- The distance between the left edge of contour 1 and the left edge of contour 2 should be small relative to the width of the 1st contour; then we add 1 to make the ratio centered on 1:

.. math:: \textit{LEdge} = \frac{L_1 - L_2}{W_1} + 1

- The widths of both contours should be about the same:

.. math:: \textit{Width ratio} = \frac{W_1}{W_2}

- The larger stripe should be twice as tall as the smaller one:

.. math:: \textit{Height ratio} = \frac{H_1}{2 H_2}

Each of these ratios is then turned into a 0-100 score by calculating:

.. math:: 100 - (100 \cdot \mathrm{abs}(1 - \textit{Val}))

3. To determine distance, measure pixels from top of top bounding box to bottom of bottom bounding box:

.. math:: \textit{distance} = \frac{\textit{Target height in ft.} (10/12) \cdot \textit{YRes}}{2 \cdot \textit{PixelHeight} \cdot \tan (\textit{viewAngle of camera})}
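
A rough Python/OpenCV sketch of the ratio, score, and distance computation above (this is not the GRIP-generated or LabVIEW code; the contour arguments and ``view_angle`` parameter are illustrative, with ``view_angle`` being the camera view angle used in the distance formula):

.. code-block:: python

   import math

   import cv2


   def ratio_score(ratio):
       # Turn a ratio that should nominally equal 1.0 into a 0-100 score.
       return 100 - (100 * abs(1 - ratio))


   def score_pair(contour1, contour2, y_res, view_angle):
       # Bounding rects: x is the left edge, y is the top edge.
       l1, t1, w1, h1 = cv2.boundingRect(contour1)
       l2, t2, w2, h2 = cv2.boundingRect(contour2)
       b2 = t2 + h2
       total_height = b2 - t1  # top of contour 1 to bottom of contour 2

       ratios = {
           "group height": h1 / (0.4 * total_height),
           "dTop": (t2 - t1) / (0.6 * total_height),
           "LEdge": (l1 - l2) / w1 + 1,
           "width": w1 / w2,
           "height": h1 / (2 * h2),
       }
       scores = {name: ratio_score(r) for name, r in ratios.items()}

       # Distance per the formula above: the target is 10 in (10/12 ft) tall and
       # total_height is the pixel height from top of top box to bottom of bottom box.
       distance = (10 / 12) * y_res / (2 * total_height * math.tan(view_angle))
       return scores, distance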

The LabVIEW example uses height as the edges of the round target are the most prone to noise in detection (as the angle points further from the camera the color looks less green). The downside of this is that the pixel height of the target in the image is affected by perspective distortion from the angle of the camera. Possible fixes include:

- Try using width instead
- Empirically measure height at various distances and create a lookup table or regression function
- Mount the camera to a servo, center the target vertically in the image and use servo angle for distance calculation (you'll have to work out the proper trig yourself or find a math teacher to help!)
- Correct for the perspective distortion using OpenCV. To do this you will need to `calibrate your camera with OpenCV <https://docs.opencv.org/3.4.6/d4/d94/tutorial_camera_calibration.html>`_. This will result in a distortion matrix and camera matrix. You will take these two matrices and use them with the undistortPoints function to map the points you want to measure for the distance calculation to the "correct" coordinates (this is much less CPU intensive than undistorting the whole image)
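
A minimal sketch of that last bullet, assuming ``camera_matrix`` and ``dist_coeffs`` were produced by a prior OpenCV camera calibration:

.. code-block:: python

   import cv2
   import numpy as np


   def undistort_pixels(points, camera_matrix, dist_coeffs):
       """Map raw pixel coordinates (e.g. target top/bottom) to undistorted pixel positions."""
       pts = np.asarray(points, dtype=np.float32).reshape(-1, 1, 2)
       # Passing P=camera_matrix keeps the output in pixel coordinates rather
       # than normalized image coordinates.
       corrected = cv2.undistortPoints(pts, camera_matrix, dist_coeffs, P=camera_matrix)
       return corrected.reshape(-1, 2)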
@@ -5,15 +5,6 @@ Identifying and Processing the Targets

Once an image is captured, the next step is to identify Vision Target(s) in the image. This document will walk through one approach to identifying the 2016 targets. Note that the images used in this section were taken with the camera intentionally set to underexpose the images, producing very dark images with the exception of the lit targets; see the section on Camera Settings for details.
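
As a generic illustration of this kind of pipeline (not this document's own example code; the HSV bounds are placeholders you would tune for your camera and lighting), thresholding the bright target color in an underexposed image and extracting contours might look like:

.. code-block:: python

   import cv2


   def find_target_contours(bgr_image):
       # In an underexposed image the lit target is one of the few bright regions,
       # so a simple HSV threshold isolates it.
       hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
       mask = cv2.inRange(hsv, (50, 100, 100), (90, 255, 255))  # placeholder green bounds
       contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
       return sorted(contours, key=cv2.contourArea, reverse=True)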

Additional Options
------------------

This document walks through the approach used by the example code provided in LabVIEW (for PC or roboRIO), C++ and Java. In addition to these options teams should be aware of the following alternatives that allow for vision processing on the Driver Station PC or an on-board PC:

1. `RoboRealm <http://www.roborealm.com/>`_
2. SmartDashboard Camera Extension (programmed in Java, works with any robot language)
3. `GRIP <https://wpiroboticsprojects.github.io/GRIP/>`_

Original Image
--------------

@@ -21,9 +21,9 @@ Vision Code on roboRIO
.. image:: diagrams/vision-code-on-roborio.drawio.svg
:alt: The chain from a USB Webcam to roboRIO to Ethernet Switch over a video stream to the driver station computer.

Vision code can be embedded into the main robot program on the roboRIO. Building and running the vision code is straightforward because it is built and deployed along with the robot program. The vision code can be written by hand or generated by GRIP in either C++ or Java. The disadvantage of this approach is that having vision code running on the same processor as the robot program can cause performance issues. This is something you will have to evaluate depending on the requirements for your robot and vision program.
Vision code can be embedded into the main robot program on the roboRIO. Building and running the vision code is straightforward because it is built and deployed along with the robot program. The vision code can be written in C++, Java, or Python. The disadvantage of this approach is that having vision code running on the same processor as the robot program can cause performance issues. This is something you will have to evaluate depending on the requirements for your robot and vision program.

In this approach, the vision code simply produces results that the robot code directly uses. Be careful about synchronization issues when writing robot code that is getting values from a vision thread. The GRIP generated code and the VisionRunner class in WPILib make this easier.
In this approach, the vision code simply produces results that the robot code directly uses. Be careful about synchronization issues when writing robot code that gets values from a vision thread. The VisionRunner class in WPILib makes this easier.
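
A minimal illustration of that synchronization concern (this is the general pattern, not WPILib's VisionRunner itself): the vision thread writes results under a lock, and the robot loop reads a consistent snapshot under the same lock.

.. code-block:: python

   import threading


   class VisionResults:
       """Shared between the vision thread (writer) and the robot loop (reader)."""

       def __init__(self):
           self._lock = threading.Lock()
           self._center_x = 0.0
           self._has_target = False

       def update(self, center_x, has_target):
           # Called from the vision thread after each processed frame.
           with self._lock:
               self._center_x = center_x
               self._has_target = has_target

       def get(self):
           # Called from the robot loop; both values come from the same frame.
           with self._lock:
               return self._center_x, self._has_target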

Using functions provided by the CameraServer class, the video stream can be sent to dashboards such as Shuffleboard so operators can see what the camera sees. In addition, annotations can be added to the images using OpenCV commands so targets or other interesting objects can be identified in the dashboard view.

@@ -33,11 +33,11 @@ Vision Code on DS Computer
.. image:: diagrams/vision-code-on-ds-computer.drawio.svg
:alt: Same as the above diagram but the Driver Station computer must process that video and send NetworkTables updates back to the roboRIO.

When vision code is running on the DS computer, the video is streamed back to the Driver Station laptop for processing. Even the older Classmate laptops are substantially faster at vision processing than the roboRIO. GRIP can be run on the Driver Station laptop directly with the results sent back to the robot using NetworkTables. Alternatively you can write your own vision program using a language of your choosing. Python makes a good choice since there is a native NetworkTables implementation and the OpenCV bindings are very good.
When vision code is running on the DS computer, the video is streamed back to the Driver Station laptop for processing. Even the older Classmate laptops are substantially faster at vision processing than the roboRIO. You can write your own vision program using a language of your choosing. Python makes a good choice since there is a native NetworkTables implementation and the OpenCV bindings are very good.

After the images are processed, the key values such as the target position, distance or anything else you need can be sent back to the robot with NetworkTables. This approach generally has higher latency, as delay is added due to the images needing to be sent to the laptop. Bandwidth limitations also limit the maximum resolution and FPS of the images used for processing.
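
A bare-bones sketch of that laptop-side loop (the team number, stream URL, table name, and key are placeholders, and the "processing" is a stand-in for a real pipeline):

.. code-block:: python

   import cv2
   import ntcore

   inst = ntcore.NetworkTableInstance.getDefault()
   inst.startClient4("laptop-vision")
   inst.setServerTeam(1234)  # placeholder team number
   table = inst.getTable("Vision")

   # Placeholder MJPEG stream URL published by the robot's CameraServer.
   cap = cv2.VideoCapture("http://roborio-1234-frc.local:1181/stream.mjpg")

   while True:
       ok, frame = cap.read()
       if not ok:
           continue
       # Stand-in processing: count bright pixels instead of real target detection.
       gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
       bright = cv2.countNonZero(cv2.inRange(gray, 200, 255))
       table.getEntry("brightPixels").setDouble(float(bright))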

The video stream can be displayed on Shuffleboard or in GRIP.
The video stream can be displayed on Shuffleboard or SmartDashboard.

Vision Code on Coprocessor
--------------------------
@@ -20,10 +20,9 @@
Processing
^^^^^^^^^^

Instead of only streaming the camera to the Driver Station, this method involves using the frames captured by the camera to compute information, such as a game piece's or target's angle and distance from the camera. This method requires more technical knowledge and time in order to implement, as well as being more computationally expensive. However, this method can help improve autonomous performance and assist in "auto-scoring" operations during the teleoperated period. This method can be done using the roboRIO or a coprocessor such as the Raspberry Pi using either OpenCV or programs such as GRIP.
Instead of only streaming the camera to the Driver Station, this method involves using the frames captured by the camera to compute information, such as a game piece's or target's angle and distance from the camera. This method requires more technical knowledge and time in order to implement, as well as being more computationally expensive. However, this method can help improve autonomous performance and assist in "auto-scoring" operations during the teleoperated period. This method can be done using the roboRIO or a coprocessor such as the Raspberry Pi using OpenCV.

- :ref:`Vision Processing with Raspberry Pi <docs/software/vision-processing/wpilibpi/index:Vision with WPILibPi>`
- :ref:`Vision Processing with GRIP <docs/software/vision-processing/grip/index:Vision with GRIP>`
- :ref:`Vision Processing with the roboRIO <docs/software/vision-processing/roborio/using-the-cameraserver-on-the-roborio:Advanced Camera Server Program>`

For additional information on the pros and cons of using a coprocessor for vision processing, see the next page, :ref:`docs/software/vision-processing/introduction/strategies-for-vision-programming:Strategies for Vision Programming`.