fix(vector source protobuf codec disk buffering) Build prost with no-recursion-limit feature #19413
base: master
Conversation
```diff
@@ -203,7 +203,7 @@ rmp-serde = { version = "1.1.2", default-features = false, optional = true }
 rmpv = { version = "1.0.1", default-features = false, features = ["with-serde"], optional = true }

 # Prost / Protocol Buffers
-prost = { version = "0.12", default-features = false, features = ["std"] }
+prost = { version = "0.12", default-features = false, features = ["std", "no-recursion-limit"] }
```
Generally, I am not a fan of unbounded defaults. See discussions:
- https://github.com/rustsec/advisory-db/blob/main/crates/prost/RUSTSEC-2020-0002.md
- Stack overflow when parsing message tokio-rs/prost#267
- Rust code segfaults with stack overflow rust-lang/rust#79935
A better alternative here would be to reach out to the prost maintainers and ask them to expose RECURSION_LIMIT (100 seems a bit low) as a configuration option; we could then re-expose it in our codec config.
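To make that suggestion concrete, here is a hypothetical sketch of what re-exposing such a limit through the codec options could look like. The recursion_limit field and its plumbing do not exist today; desc_file and message_type are meant to mirror the existing protobuf decoder options, and serde/toml are assumed only for the example.

```rust
// Hypothetical sketch only: a codec options struct with a recursion limit
// that could be forwarded to prost, if prost ever exposes one.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct ProtobufDeserializerOptions {
    /// Path to the compiled descriptor set (mirrors the existing option).
    pub desc_file: std::path::PathBuf,
    /// Fully qualified message type to decode (mirrors the existing option).
    pub message_type: String,
    /// Hypothetical knob, defaulting to prost's current hard-coded limit.
    #[serde(default = "default_recursion_limit")]
    pub recursion_limit: u32,
}

fn default_recursion_limit() -> u32 {
    100
}

fn main() {
    let opts: ProtobufDeserializerOptions = toml::from_str(
        r#"
        desc_file = "protos/events.desc"
        message_type = "example.Event"
        recursion_limit = 1000
        "#,
    )
    .unwrap();
    println!("{opts:?}");
}
```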
Agreed, removing the limit entirely could lead to runaway recursion bugs. I'd also feel better about exposing a configurable option.
I've been looking over the code paths used, to map out what all is in play. It is not enough to get prost to expose RECURSION_LIMIT (or, more likely, to add a new constructor for DecodeContext that takes a recurse_count parameter). The protobuf codec uses DynamicMessage to do the decoding, which would then also have to be extended to accept a client-provided DecodeContext; the general idea being that Vector would create a DecodeContext with the desired recurse_count.
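For reference, a minimal sketch of the shape of that decode path, assuming prost-reflect as the DynamicMessage provider; the descriptor bytes and message name are placeholders. Note there is no argument through which a caller-supplied DecodeContext could be threaded:

```rust
// Sketch only: DynamicMessage-based decoding as used by the protobuf codec.
use prost_reflect::{DescriptorPool, DynamicMessage};

fn decode_dynamic(
    descriptor_set: &[u8], // contents of a compiled .desc file
    message_name: &str,    // e.g. "example.Event" (illustrative)
    payload: &[u8],
) -> Result<DynamicMessage, Box<dyn std::error::Error>> {
    let pool = DescriptorPool::decode(descriptor_set)?;
    let desc = pool
        .get_message_by_name(message_name)
        .ok_or("unknown message type")?;
    // prost enforces its internal RECURSION_LIMIT (100) during this call;
    // there is no parameter for passing a DecodeContext with a larger limit.
    Ok(DynamicMessage::decode(desc, payload)?)
}
```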
However, this only handles sources/sinks using the protobuf codec directly. The bigger issue is that the Vector-native source/sink do not (directly) use prost at all. They operate through tonic, using compile-time stubs generated from the proto file. All byte-stream decoding simply calls those autogenerated stubs, so there are no prost call sites to modify or extend to use a client-provided DecodeContext.
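To illustrate that point, a sketch of what the generated stubs ultimately boil down to; PushEventsRequest is a made-up stand-in, not Vector's actual generated type:

```rust
// Sketch only: decoding through a prost-build/tonic generated stub.
use prost::Message;

// Illustrative stand-in for a generated request type.
#[derive(Clone, PartialEq, ::prost::Message)]
struct PushEventsRequest {
    #[prost(bytes = "vec", tag = "1")]
    payload: Vec<u8>,
}

fn handle_frame(frame: &[u8]) -> Result<PushEventsRequest, prost::DecodeError> {
    // Message::decode builds its own DecodeContext internally, so neither the
    // generated service code nor this call site offers a seam for supplying a
    // larger recursion limit.
    PushEventsRequest::decode(frame)
}
```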
I can't tell what/where/how the disk buffering encoding/decoding ultimately ties into prost, to figure out that angle.
Further, a configurable recursion-limit knob in the protobuf codec and in the Vector source configs would actually have two different meanings, because the proto-generated Vector-native Event (or EventMessage) stubs currently use roughly three layers of message nesting per decoded field (Event -> Kind -> Value, I believe). That is effectively why the prost limit of 100 layers equates to a client-visible JSON nesting limit of 32/33 layers (roughly 100 / 3). Either documentation or a multiplier would be needed, and it would have to stay in lockstep with the proto definition of the Vector-native Event message type.
I would also tend to think runaway recursion would have to first get past the max message size limits?
It looks like there is some movement on making the limit configurable (at build time): tokio-rs/prost#785
I missed that that was a year ago 😭 Maybe someone will want to pick up that torch though.
> I would also tend to think runaway recursion would have to first get past the max message size limits?
I think it's pretty trivial to create a nested struct that is well below the message limit but with significant recursion.
I see what you are saying about the difficulties of configuring it at runtime, if that were even possible. I think I'd be ok with seeing a build-time increase to the limit (say, to 1000), but that will still require prost support first.
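As a concrete illustration of how small a deeply nested payload can be, here is a self-contained sketch using a throwaway recursive message type (not Vector's proto): the encoded payload is only a few hundred bytes, yet it exceeds stock prost's recursion limit of 100.

```rust
// Sketch: a tiny payload that is far below any byte-size limit but nests
// deeper than prost's default recursion limit.
use prost::Message;

// Throwaway recursive message type, for illustration only.
#[derive(Clone, PartialEq, ::prost::Message)]
struct Node {
    #[prost(message, optional, boxed, tag = "1")]
    child: Option<Box<Node>>,
}

fn main() {
    // Build a 200-level-deep chain; each level adds only a few bytes.
    let mut node = Node { child: None };
    for _ in 0..200 {
        node = Node {
            child: Some(Box::new(node)),
        };
    }
    let bytes = node.encode_to_vec();
    println!("encoded size: {} bytes", bytes.len()); // a few hundred bytes

    // With stock prost (limit 100) this fails with "recursion limit reached";
    // with the no-recursion-limit feature enabled it decodes successfully.
    match Node::decode(bytes.as_slice()) {
        Ok(_) => println!("decoded"),
        Err(e) => println!("decode error: {e}"),
    }
}
```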
Related to issue #19315. The prost protobuf crate used by tokio and serde has a default decoding recursion limit of 100. When used by the Vector-native source, the protobuf codec, and/or any sink with disk buffering, this leads to an effective field-nesting depth limit of 32 for event payloads. Payloads with fields nested deeper than 32 levels fail to decode, causing the event to fail to be received, read from the disk buffer, etc. Depending on the location (the Vector-native source, disk buffering, etc.), various unrecoverable failure-to-read errors can occur. Enabling the no-recursion-limit feature in the prost crate removes this limitation.