
Combining Listen task with Vision task #9

Open
JensVanhooydonck opened this issue Jun 29, 2021 · 5 comments
@JensVanhooydonck

Is it possible to combine the Listen task with a Vision task?
When I add the Vision task in a thread, it throws an error saying the Vision task can only be used in the main thread.

A Listen task can't be added in a thread either.

@daan-schepers

No, it is not possible to run these two together.

There are some ways to, for example, get data externally and run a vision task, but it depends on the application.
In what kind of situation do you need these to run simultaneously?

@JensVanhooydonck
Author

I want to use the camera to do some basic localisation of QR codes and use that as input to move the arm. For now I have created a listener which sends an Exit command; after the listener, we run one Vision job and then start the listener job again. This is pretty slow.

Is there another way to access the video feed? I don't really need the Vision task itself; I actually need the video feed in my program. But I couldn't find anything about accessing the video feed directly.

@daan-schepers

For now, there is (as far as I know) no way to get a live video feed without the Vision node. It is possible to send images with the built-in image_talker node in this Techman package, but you can only send an image every 200 ms or so.

I don't know if it will help you, but you can use the built-in 'visual servoing' in TMvision (software manual, TMvision chapter 3.2.1) to move and center the robot arm according to its video feed. If you want to use your own application to scan the QR code, you can center with this vision job and add a second one after it to send image data to your external detection system.

@MatthijsBurgh

@daan-schepers @JensVanhooydonck

Do I understand correctly that it is not possible to have the tm_driver and the image_talker running in parallel?

This would require both a program running the Listen node and a program with a Vision task. Is that possible?

Is it possible to combine these two into one program with different subflows and/or threads?

@MatthijsBurgh

MatthijsBurgh commented Aug 9, 2023

I have created a program with a Listen node. The fail path is connected to Stop. The pass path is connected to the Vision job that sends an image. Both paths of the Vision job are connected to a Goto, which goes back to the Listen node.

To get an image, I send "ScriptExit(1)" to go to the Vision job.
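The external side of this round trip can be scripted. Below is a minimal sketch, assuming the standard Techman listen-node protocol (TMSCT frames over TCP port 5890, with an XOR checksum over the bytes between `$` and `*`, as described in the Techman Expression Editor manual); the robot IP and packet ID here are placeholders, not values from this thread.

```python
import socket

def tmsct_checksum(payload: str) -> str:
    """XOR of all bytes between '$' and '*', as two uppercase hex digits."""
    cs = 0
    for b in payload.encode("ascii"):
        cs ^= b
    return f"{cs:02X}"

def build_tmsct(script: str, packet_id: str = "1") -> bytes:
    """Wrap one script line in a TMSCT frame for the listen node."""
    data = f"{packet_id},{script}"
    payload = f"TMSCT,{len(data)},{data},"
    return f"${payload}*{tmsct_checksum(payload)}\r\n".encode("ascii")

def trigger_vision_job(robot_ip: str) -> None:
    # Hypothetical usage: exit the Listen node via its pass path so the
    # flow continues into the Vision job that sends an image.
    with socket.create_connection((robot_ip, 5890), timeout=5) as s:
        s.sendall(build_tmsct("ScriptExit(1)"))

# Inspect the frame that would be sent:
print(build_tmsct("ScriptExit(1)"))
```

Each loop iteration then costs one TCP round trip plus the Vision job itself, which matches the latency complaint earlier in the thread.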
