# Releases: roboflow/inference
## v0.22.0

### 🚀 Added

#### 🔥 YOLOv11 in `inference` 🔥

We’re excited to announce that YOLOv11 has been added to `inference`! 🚀 You can now use both `inference` and the `inference` server to get predictions from the latest YOLOv11 model. 🔥

All thanks to @probicheaux and @SolomonLake 🏅
*(video: skateboard_yolov11.mov)*
#### Try the model in the `inference` Python package
```python
import cv2
from inference import get_model

image = cv2.imread("<your-image>")
model = get_model("yolov11n-640")
predictions = model.infer(image)
print(predictions)
```
### 💪 Workflows update

#### Google Vision OCR in Workflows

Thanks to an open source contribution from @brunopicinin, we now have Google Vision OCR integrated into the Workflows ecosystem. Great to see open source community contributions 🏅
*(video: google_vision_ocr.mp4)*
See the 📖 documentation of the new block to explore its capabilities.
#### Image Stitching Workflow block

📷 Is your camera unable to cover the whole area you want to observe? Don't worry! @grzegorz-roboflow just added a Workflow block that combines the POV of multiple cameras into a single image that can be further processed in your Workflow.

| image 1 | image 2 | stitched image |
|---|---|---|
#### 📏 Size Measurement block

Thanks to @chandlersupple, we can now measure the actual size of objects with Workflows! Take a look at the 📖 documentation to discover how the block works.
#### Workflows Profiler and Execution Engine speedup 🏇

We've added the Workflows Profiler - an ecosystem extension to profile the execution of your Workflow. It works for `inference` server requests (both self-hosted and on the Roboflow platform) as well as for `InferencePipeline`.

The cool thing about the profiler is that its output is compatible with `chrome://tracing` - so you can easily grab the profiler output and render it in the Google Chrome browser.

To profile your Workflow execution, use the following code snippet - traces are saved in the `./inference_profiling` directory by default.
```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR-API-KEY>",
)
results = client.run_workflow(
    workspace_name="<your-workspace>",
    workflow_id="<your-workflow-id>",
    images={
        "image": "<YOUR-IMAGE>",
    },
    enable_profiling=True,
)
```
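For context, `chrome://tracing` consumes the Chrome Trace Event Format - a JSON document with a `traceEvents` list. The sketch below hand-rolls a minimal trace just to show what the viewer expects; the step names, timings, and file name are illustrative, not what the profiler actually emits:

```python
import json

# Illustrative events only - the real profiler records actual Workflow steps.
# "X" marks a complete event; "ts" and "dur" are in microseconds.
events = [
    {"name": "workflow_run", "ph": "X", "ts": 0, "dur": 5000, "pid": 1, "tid": 1},
    {"name": "model_inference", "ph": "X", "ts": 200, "dur": 3500, "pid": 1, "tid": 1},
]
trace = {"traceEvents": events, "displayTimeUnit": "ms"}

with open("example_trace.json", "w") as f:
    json.dump(trace, f)
```

Loading such a file via the "Load" button in `chrome://tracing` renders the events on a timeline, one row per `pid`/`tid` pair.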
See the detailed report on speed optimisations in PR #710

#### ❗ Important note

As part of the speed optimisation we enabled server-side caching for Workflow definitions saved on the Roboflow platform. If you frequently change your Workflow and want to see the results immediately, you need to pass the `use_cache=False` parameter to the `client.run_workflow(...)` method.
### 🔧 Fixed
- Fix prometheus scraping by @robiscoding in #712
- Fix the problem with VLMs on batch inference by @PawelPeczek-Roboflow in #718
### 🌱 Changed
- Docker auto-reload configuration by @EmilyGavrilenko in #703
- Multi-Label Classification UQL Operations by @EmilyGavrilenko in #714
- Add port forward to Notebook Landing Page message by @hansent in #711
- Add optional descriptions to dynamic blocks by @EmilyGavrilenko in #702
- Add descriptions to task types in `VLM as Detector` by @PawelPeczek-Roboflow in #704
- Add tests for Google Vision OCR by @PawelPeczek-Roboflow in #715
- Improvements regarding custom python blocks by @PawelPeczek-Roboflow in #716
### 🏅 New Contributors

We want to honor @brunopicinin, who made their first contribution to `inference` in #709 as part of Hacktoberfest 2024. We invite other open-source community members to contribute 😄
Full Changelog: v0.21.1...v0.22.0
## v0.21.1

### What's Changed
- Improvements in contribution guides by @PawelPeczek-Roboflow in #691
- Fix issue with SAM2 producing segmentation masks as points and lines by @PawelPeczek-Roboflow in #697
Full Changelog: v0.21.0...v0.21.1
## v0.21.0

### 🚀 Added

#### 👩‍🎨 Become an artist with Workflows 👨‍🎨
Ever wanted to be an artist but felt like you lacked the skills? No worries! We’ve just added the StabilityAI Inpainting block to the Workflows ecosystem. Now, you can effortlessly add whatever you envision into your images! 🌟🖼️
Credits to @Fafruch for the original idea 💪
*(video: inpainting_stability_ai_demo.mp4)*
#### 🤯 Workflows + video + `inference` server - Experimental feature preview 🔬

Imagine creating a Workflow in our UI and tuning it to understand what happens in your video. So far, video processing with `InferencePipeline` has required a bit of glue code and setup in your environment. We hope that soon you won’t need any custom scripts for video processing! You’ll be able to ship your Workflow directly to the `inference` server, just pointing it at the video source to process.
We're thrilled to announce that we’ve taken the first step toward making this idea a reality! Check out our experimental feature for video processing, now controlled by the inference server with a user-friendly REST API for easy integration.
*(video: video_processing_behind_api.mp4)*
🔍 We encourage you to try it out! The feature is available in the inference server Docker images that you can self-host. Please note that this feature is experimental, and breaking changes are to be expected. Check out our 📖 docs to learn more.
#### 🙃 Flips, Rotations and Resizing in Workflows
Tired of dealing with image orientation problems while building demos with Workflows? Whether it's resizing, rotating, or flipping, those headaches end today with our new features for seamless image adjustments!
All thanks to @EmilyGavrilenko and PR #683
#### ✨ Ensure Your Tracked Objects Stay on Course! 🛰️

Wondering if the objects you're tracking follow the path you intended? We’ve got you covered in Workflows! Thanks to @shantanubala, we now offer Fréchet Distance Analysis as a Workflow block. Simply specify the desired trajectory, and the Workflow calculates the deviation for each tracked box. 📊
See details: #682
**What’s Fréchet Distance?** It’s a mathematical measure of similarity between two curves - perfect for analyzing how closely your tracked objects follow the path you’ve set.
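For intuition, the discrete Fréchet distance between two polylines can be computed with a short dynamic program. This is a sketch for illustration, not the block's actual implementation:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def frechet_distance(p, q):
    """Discrete Fréchet distance between two polylines given as lists of 2D points."""
    n, m = len(p), len(q)
    # ca[i][j] = coupling distance between the prefixes p[:i+1] and q[:j+1]
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(p[i], q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[i][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][j], d)
            else:
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[-1][-1]

path = [(0, 0), (1, 0), (2, 0)]       # desired trajectory
track = [(0, 1), (1, 1), (2, 1)]      # observed track, offset by 1 unit
print(frechet_distance(path, track))  # → 1.0
```

Intuitively, this is the shortest "leash" that lets a person and a dog each walk their own curve from start to end without backtracking.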
#### 🆕 Background removal in Dynamic Crop and updated UI for VLMs
Let’s be honest—VLMs in Workflows still had room for improvement, especially when integrating model outputs with other blocks. Well, we've made it better! 🎉 Now, each model task comes with a clear description and a suggested parser to follow the block, helping you get the most out of your model predictions with ease. 🛠️
Additionally, you can now remove background while performing Dynamic Crop on Instance Segmentation model results 🤯
*(video: CleanShot.2024-09-26.at.18.38.33.mp4)*
### 🔧 Fixed
- Fix lora by @probicheaux in #676
- Add ability to parametrise class remap UQL operation by @PawelPeczek-Roboflow in #674
- Fix license link in docs by @NickHerrig in #680
- Add `transformers` extra to dev setup py by @PawelPeczek-Roboflow in #678
### 🌱 Changed

- `gstreamer` backend in `inference` by @PawelPeczek-Roboflow in #646
- Add workflow schema api docs by @NickHerrig in #677
- Add Florence-2 📖 documentation by @capjamesg in #626
### 🏅 New Contributors
- @shantanubala made their first contribution in #682
Full Changelog: v0.20.1...v0.21.0
## v0.20.1

### What's Changed
- Fix workflows gallery previews by @PawelPeczek-Roboflow in #668
- Add option to reset timer when detection falls outside of zone by @grzegorz-roboflow in #671
- Line counter improvements by @EmilyGavrilenko in #670
Full Changelog: v0.20.0...v0.20.1
## v0.20.0

### 🚀 Added

#### 🌟 Florence 2 🤝 Workflows
Thanks to @probicheaux, the Workflows ecosystem just got better with the addition of the Florence 2 block. Florence 2, one of the top open-source releases this year, is a powerful Visual Language Model capable of tasks like object detection, segmentation, image captioning, OCR, and more. Now, you can use it directly in your workflows!
#### Florence 2 and SAM 2 - zero-shot grounded segmentation
Ever wished for precise segmentation but didn’t have the data to train your model? Now you don’t need it! With Florence 2 and SAM 2, you can achieve stunning segmentation results effortlessly — without a single annotation.
Discover how to combine these powerful models and get top-tier segmentation quality for free!
*(video: florence2_and_sam2.mp4)*
#### Florence 2 as OCR model

Need Text Layout Detection and OCR? Florence 2 has you covered!

*(video: florence2_with_ocr.mp4)*
#### Zero-shot object detection needed?

Do not hesitate to try out Florence 2 as an object detection model - the quality of results is surprisingly good 🔥

*(video: florence2_object_detection.mp4)*
#### 🔔 Additional notes

- Florence 2 requires either a Roboflow Dedicated Deployment or a self-hosted `inference` server - it is not available on the Roboflow Hosted Platform
- To discover the full potential of Florence 2, read the paper
- Visit the 📖 documentation of the Florence 2 Workflow block
#### New version of SIFT block

Tired of using the SIFT descriptors calculation block followed by SIFT comparison? That is no longer needed. Check out the SIFT Comparison v2 block. PR: #657
#### Workflows UQL extended with new operations

You may not even be aware of it, but the Universal Query Language powers the Workflows operations that can be fully customised in the UI. Two new features have shipped:
- selecting prediction by confidence by @EmilyGavrilenko in #655
- class names remapping operation by @NickHerrig in #656
#### Instance Segmentation ⏩ oriented rectangle

Thanks to @chandlersupple, Instance Segmentation results can now be turned into oriented bounding boxes - check out the 📖 docs
### 🔧 Fixed

- Broken links removed from docs in #663
- Fixes to release `0.19.0`:
  - broken visualisation blocks got repaired by @grzegorz-roboflow in #662
  - proper defaults brought into line counter and time in zone analytics blocks by @grzegorz-roboflow in #658
### 🌱 Changed
- Control what properties are visible in the UI by default by @EmilyGavrilenko in #659
- E2E tests for describe workflow and examples by @PawelPeczek-Roboflow in #652
- Add secrets to integration tests by @PawelPeczek-Roboflow in #667
Full Changelog: v0.19.0...v0.20.0
## v0.19.0

### 🚀 Added

#### 🎥 Video processing in Workflows 🤯
We’re excited to announce that, thanks to the contributions of @grzegorz-roboflow, our Workflows ecosystem now extends to video processing! Dive in and explore the new possibilities:
*(video: dwell_time_demo.mp4)*
New blocks:

- Time In Zone to analyse dwell time
- Line Counter to detect objects passing a line
- Visualisations for zone and line counter
We've introduced minimal support for video processing in the Workflows UI, with plans to expand to more advanced features soon. To get started, you can create a Python script using the InferencePipeline, similar to the provided example.
Video source: YT | Karol Majek
#### 🔥 OWLv2 🤝 `inference`

Thanks to @probicheaux, we have the OWLv2 model in `inference`. OWLv2 was primarily trained to detect objects from text. The implementation in `inference` currently only supports detecting objects from visual examples of that object.

You can use the model in the `inference` server - both CPU and GPU - as well as in the Python package. Visit our 📖 docs to learn more.
*(video: Screen.Recording.2024-09-19.at.21.36.13.mov)*
#### 👓 TROCR 🤝 `inference`

@stellasphere shipped the TROCR model to expand the OCR models offering in `inference` 🔥

You can use the model in the `inference` server - both CPU and GPU - as well as in the Python package. Visit our 📖 docs to learn more.
#### 🧑‍🎓 Workflows - endpoint to discover interface

Guessing the data format for Workflow inputs and outputs has been a challenge until now, but thanks to @EmilyGavrilenko this is no longer the case. We offer two new endpoints (one for Workflows registered on the platform and one for Workflows submitted in the request payload). Details in #644.

**🔔 Example response**
```json
{
  "inputs": {
    "image": ["image"],
    "model_id": ["roboflow_model_id"]
  },
  "outputs": {
    "detections": ["object_detection_prediction"],
    "crops": ["image"],
    "classification": {
      "inference_id": ["string"],
      "predictions": ["classification_prediction"]
    }
  },
  "typing_hints": {
    "image": "dict",
    "roboflow_model_id": "str",
    "object_detection_prediction": "dict",
    "string": "str",
    "classification_prediction": "dict"
  },
  "kinds_schemas": {
    "image": {},
    "object_detection_prediction": {"dict": "with OpenAPI 3.0 schema of result"},
    "classification_prediction": {"dict": "with OpenAPI 3.0 schema of result"}
  }
}
```
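As an illustration of how a client might consume such a response, the hypothetical helper below (not part of the SDK) flattens the declared outputs and resolves each kind to its Python typing hint:

```python
# Trimmed copy of the example response above - only the fields the helper uses.
response = {
    "outputs": {
        "detections": ["object_detection_prediction"],
        "crops": ["image"],
        "classification": {
            "inference_id": ["string"],
            "predictions": ["classification_prediction"],
        },
    },
    "typing_hints": {
        "image": "dict",
        "object_detection_prediction": "dict",
        "string": "str",
        "classification_prediction": "dict",
    },
}

def resolve_output_types(response):
    """Map every output field (flattening one level of nesting) to its typing hints."""
    hints = response["typing_hints"]
    resolved = {}
    for name, kinds in response["outputs"].items():
        if isinstance(kinds, dict):  # nested outputs, e.g. "classification"
            for sub_name, sub_kinds in kinds.items():
                resolved[f"{name}.{sub_name}"] = [hints[k] for k in sub_kinds]
        else:
            resolved[name] = [hints[k] for k in kinds]
    return resolved

print(resolve_output_types(response))
```

This yields a flat mapping such as `{"detections": ["dict"], "classification.inference_id": ["str"], ...}`, which a client can use to validate payloads before sending a request.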
### 🌱 Changed
- Bring back transformers extras by @probicheaux in #639
- Move xformers to paligemma code by @probicheaux in #641
- Cache plan details by @grzegorz-roboflow in #636
- Bump next from 14.2.7 to 14.2.12 in /inference/landing by @dependabot in #649
### 🔧 Fixed

- Fixed a bug in the Workflows Execution Engine that surfaced when conditional execution discards inputs of a step that changes dimensionality - see details in #645
### ♻️ Removed
- Remove unmaintained device management code by @robiscoding in #647
Full Changelog: v0.18.1...v0.19.0
## v0.18.1

### 🔨 Fixed

The new `VLM as Classifier` Workflows block had a bug - multi-label classification results were generated with a "class_name" field instead of "class" in prediction details: #637
### 🌱 Changed
- Increase timeout to 30 minutes for .github/workflows/test_package_install_inference_with_extras.yml by @grzegorz-roboflow in #635
Full Changelog: v0.18.0...v0.18.1
## v0.18.0

### 🚀 Added

#### 💪 New VLMs in Workflows

We've shipped blocks to integrate with Google Gemini and Anthropic Claude, but that's not everything! The OpenAI block got updated too. The new "VLM interface" of the block assumes that it can be prompted using pre-configured options and that the model output can be processed by a set of formatter blocks to achieve the desired end. It is now possible to:
- use `classification` prompting in a VLM block and apply the `VLM as Classifier` block to turn the output string into a classification result, then process it further using other blocks from the ecosystem
- achieve the same for `object-detection` prompting with the `VLM as Detector` block, which converts text produced by the model into `sv.Detections(...)`

From now on, VLMs are much easier to integrate.
#### 🧑‍🦱 USE CASE: PII protection when prompting a VLM

Detect faces first, apply the blur visualisation to the predictions, and then ask the VLM what the person's eye colour is - it won't be able to tell 🙃
#### 👨‍🎨 USE CASE: VLM as object detection model

#### 👓 USE CASE: VLM as secondary classifier

Turn VLM output into classification results and process it using downstream blocks - here we ask Gemini to classify crops of dogs to tell the dog's breed, then we extract the top class as a property.
#### 🤯 Workflows previews in documentation 📖
Thanks to @joaomarcoscrs we can embed Workflows into documentation pages. Just take a look how amazing it is ❗
### 🌱 Changed

- E2E tests for `workflows` on hosted platform by @PawelPeczek-Roboflow in #622
- Allow model type "yolov8" without size by @SolomonLake in #627
- Fix `GDAL` issue in GHA by @PawelPeczek-Roboflow in #628
- Add support for DELETE in sqlite wrapper by @grzegorz-roboflow in #631
- Keep distinct exec sessions for inf pipeline usage by @grzegorz-roboflow in #632
- Insert usage payloads into redis and to sorted set atomically by @grzegorz-roboflow in #633
- remove height from workflow example by @capjamesg in #634
### ❗ BREAKING ❗ `Batch[X]` kinds removed from Workflows
#### What was changed and why?

In `inference` release `0.18.0` we decided to make a drastic move to heal the ecosystem from the problem of ambiguous kind names (`Batch[X]` vs `X` - see more here).

The change is breaking only for non-Roboflow Workflow plugins that depend on imports from the `inference.core.workflows.execution_engine.entities.types` module. To the best of our knowledge, there is no such plugin.

The change is not breaking in terms of running Workflows on the Roboflow platform and on-prem, provided that external plugins were not used.
#### Migration guide

Migration should be relatively easy - in the code of a Workflow block, all instances of

```python
from inference.core.workflows.execution_engine.entities.types import BATCH_OF_{{KIND_NAME}}
```

should be replaced with

```python
from inference.core.workflows.execution_engine.entities.types import {{KIND_NAME}}
```
PR with changes as reference: #618
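If you maintain a plugin with many such imports, one mechanical way to apply the rename is a `sed` pass restricted to the kind-import lines. This is a sketch: the kind names below are illustrative, and you should review the resulting diff before committing.

```shell
# Demo on a sample file - in a real plugin, point the commands at your own
# source tree instead of /tmp. Kind names here are illustrative.
cat > /tmp/plugin_block.py <<'EOF'
from inference.core.workflows.execution_engine.entities.types import BATCH_OF_IMAGES_KIND, BATCH_OF_STRING_KIND
EOF

# Strip the BATCH_OF_ prefix, but only on the kind-import lines (GNU sed).
sed -i '/execution_engine.entities.types import/ s/BATCH_OF_//g' /tmp/plugin_block.py
cat /tmp/plugin_block.py
```

Restricting the substitution to lines matching the module path avoids touching unrelated occurrences of `BATCH_OF_` elsewhere in the code.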
Full Changelog: v0.17.1...v0.18.0
## v0.17.1

### ❗ IMPORTANT ❗ Security issue in opencv-python

This release provides a fix for the following security issue:

> opencv-python versions before v4.8.1.78 bundled libwebp binaries in wheels that are vulnerable to GHSA-j7hp-h8jx-5ppr. opencv-python v4.8.1.78 upgrades the bundled libwebp binary to v1.3.2.

We advise all clients using `inference` to migrate, especially in production environments.
Full Changelog: v0.17.0...v0.17.1
## v0.17.0

### 🚀 Added

#### 💪 More Classical Computer Vision blocks in Workflows
Good news for the fans of classical computer vision!
We heard you – and we’ve added a bunch of new blocks to enhance your workflows.
##### Basic operations on images

| Workflow Definition | Preview |
|---|---|

##### Camera focus check

| Workflow Definition | Preview |
|---|---|
#### 🚀 Upgrade of `CLIP Comparison` and `Roboflow Dataset Upload` blocks
We’ve made it even more versatile. The new outputs allow seamless integration with many other blocks, enabling powerful workflows like:
detection → crop → CLIP classification (on crops) → detection class replacement
Get ready to streamline your processes with enhanced compatibility and new possibilities!
For `Roboflow Dataset Upload @ v2` there is now a possibility to sample a percentage of the data to upload, and we changed the default sizes of saved images to be bigger.

❗ Do not worry! All your old Workflows using the mentioned blocks are unaffected by the change, thanks to versioning 😄
#### 💥 New version of 📖 Workflow docs 🔥
The Wait is Over – Our Workflows Documentation is Finally Here!
We’ve revamped and expanded the documentation to make your experience smoother. It’s now organized into three clear sections:
- General Overview: Perfect for getting you up and running quickly.
- Mid-Level User Guide: Gain a solid understanding of the ecosystem without diving too deep into the technical details.
- Detailed Developer Guide: Designed for contributors, packed with everything you need to develop within the ecosystem.
Check it out and let us know what you think of the new docs!
### 🌱 Changed
- Record resource details in each usage payload by @grzegorz-roboflow in #607
- json.dumps resource_details when adding to usage payload by @grzegorz-roboflow in #610
- Enhancement in CLIP docs by @venkatram-dev in #599 - thanks for the contribution 🏅
- Sam2 multi polygons by @probicheaux in #593
- Abstract sqlite3 wrapper in usage collector sqlite queue by @grzegorz-roboflow in #619
### 🔨 Fixed
- Dynamic Crop block was buggy in some contexts - see details: #604
- Bug in integration tests by @PawelPeczek-Roboflow in #600
- Bugfix sam2 logits cache and add test by @probicheaux in #606
- Fix bug with detections offset and cover problem with additional test by @PawelPeczek-Roboflow in #611
- Only warn about version when it's lower than the latest release by @hansent in #609
- Add inference_ids to model blocks by @robiscoding in #615
- Fix Detection Offset Bug by @NickHerrig in #621
- Bump micromatch from 4.0.5 to 4.0.8 in /inference/landing by @dependabot in #617
### 🏅 New Contributors
- @reedajohns made their first contribution in #602
- @venkatram-dev made their first contribution in #599
Full Changelog: v0.16.3...v0.17.0