When a write failure occurs, the connect task handles it with a backoff sleep and retries until writes recover. In my case, when facing infrastructure problems while writing to HDFS, the HDFS write pipeline is pinned to retry through the same nodes. When the infrastructure takes a long time to recover, the connect task accumulates delay.
I would like to be able to set an upper bound on retries, after which the operation is reset and the temp file recreated. This would initiate a new write pipeline and give the write a chance to complete via different HDFS nodes.
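To make the idea concrete, here is a minimal sketch of the bounded-retry-then-reset behavior I have in mind. The names here (`maxWriteRetries`, `RecordWriter`, `resetTempFile()`) are hypothetical and illustrative, not the connector's actual API:

```java
import java.io.IOException;

// Hypothetical sketch of the proposed behavior; the writer/temp-file API is
// illustrative, not the connector's actual code.
public class BoundedRetryWriter {
    private final int maxWriteRetries;   // proposed upper bound (hypothetical config)
    private final long retryBackoffMs;   // existing backoff sleep between retries

    public BoundedRetryWriter(int maxWriteRetries, long retryBackoffMs) {
        this.maxWriteRetries = maxWriteRetries;
        this.retryBackoffMs = retryBackoffMs;
    }

    public void writeWithReset(RecordWriter writer) throws InterruptedException {
        int attempts = 0;
        while (true) {
            try {
                writer.write();          // append to the current temp file
                return;
            } catch (IOException e) {
                attempts++;
                if (attempts >= maxWriteRetries) {
                    // Abandon the pinned pipeline: discard the temp file and
                    // recreate it, so HDFS can allocate fresh datanodes.
                    writer.resetTempFile();
                    attempts = 0;
                }
                Thread.sleep(retryBackoffMs);
            }
        }
    }

    // Minimal interface standing in for the connector's writer abstraction.
    public interface RecordWriter {
        void write() throws IOException;
        void resetTempFile();
    }
}
```

Recreating the temp file forces a new HDFS output stream, which lets the namenode allocate a fresh datanode pipeline instead of retrying through the pinned one.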
Currently, this scenario works effectively when using e.g. a wall-clock time based partitioner, where a triggered rotation will very likely raise an error while attempting to close the open file, and so initiate a reset. With a record time based partitioner this does not work, because "time does not move".
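For reference, these are the two partitioner setups I mean; to the best of my knowledge these are the relevant properties, so treat exact names as approximate:

```properties
# Wall-clock based partitioning: rotation fires as real time passes, so a
# close attempt (and thus a reset) eventually happens even when writes are stuck.
partitioner.class=io.confluent.connect.hdfs.partitioner.TimeBasedPartitioner
timestamp.extractor=Wallclock
partition.duration.ms=3600000

# Record-time based partitioning: rotation only fires when record timestamps
# advance, so a stuck write never triggers the close/reset path.
# timestamp.extractor=Record
```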
JozoVilcek pushed a commit to JozoVilcek/kafka-connect-hdfs that referenced this issue on Jun 13, 2023.