Troubleshooting BigQuery Ingestion Failure with 'Failed to produce MCLs' Error

Hi <@U05ED3WJ21Y>, in our case increasing the message size limit on the Kafka broker fixed the issue:
https://datahubspace.slack.com/archives/C029A3M079U/p1695828888940509?thread_ts=1695775295.200279&cid=C029A3M079U
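If it helps, this is roughly the kind of change involved, sketched with confluent-kafka's AdminClient as a topic-level override (we actually changed the broker config in our Helm values; the topic name and 5 MB value here are just examples, and note that `alter_configs` replaces all existing dynamic overrides on the resource rather than merging):

```python
from confluent_kafka.admin import AdminClient, ConfigResource

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Raise the per-topic max message size (example topic name and value).
# Caveat: alter_configs overwrites ALL dynamic overrides on the topic,
# so include any other overrides you want to keep.
resource = ConfigResource(
    ConfigResource.Type.TOPIC,
    "MetadataChangeLog_Versioned_v1",
    set_config={"max.message.bytes": str(5 * 1024 * 1024)},
)

for res, fut in admin.alter_configs([resource]).items():
    fut.result()  # raises on failure
```

The broker-wide equivalent is `message.max.bytes` in the broker config; the topic-level override shown above is spelled `max.message.bytes`.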

Regarding the RecordTooLargeException, we only saw these in the info logs extracted from the container (not in the ingestion logs):
https://datahubproject.io/docs/how/extract-container-logs/

Thanks for the quick reply! I’m deploying to k8s and was already talking about the logs from the gms container. However, I must have overlooked that one line with the RecordTooLargeException, as I can now see it in the logs :man-facepalming:
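For anyone searching later, this is roughly how I spotted it (the deployment name and namespace are whatever your Helm release uses; the ones below are placeholders):

```python
import subprocess

# Pull recent gms logs and filter for the exception.
# "deploy/datahub-gms" and "-n datahub" are placeholders -- adjust to
# whatever your Helm release actually names the GMS deployment/namespace.
logs = subprocess.run(
    ["kubectl", "logs", "deploy/datahub-gms", "-n", "datahub", "--tail=10000"],
    capture_output=True, text=True, check=True,
).stdout

for line in logs.splitlines():
    if "RecordTooLargeException" in line:
        print(line)
```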

So apparently even the increased 5MB limit is no longer sufficient for our case :unamused: I might try compression now, as David proposed, but I also hope this won’t become a recurring theme. I didn’t look at the actual messages, but 5MB is already way more than recommended (and needed, in most cases). I wonder if there are any plans to split these up, i.e. send more but smaller messages …
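In case anyone else wants to try the compression route, these are the two producer-side knobs involved, sketched with confluent-kafka. The broker enforces its size limit on the compressed batch, so compression buys real headroom. The values are illustrative, not DataHub’s actual producer config:

```python
from confluent_kafka import Producer

# Illustrative settings only -- not DataHub's actual producer config.
producer = Producer({
    "bootstrap.servers": "localhost:9092",
    # Compress record batches on the producer. The broker checks its
    # message.max.bytes limit against the compressed size, so this
    # buys headroom for large MCL payloads.
    "compression.type": "lz4",
    # librdkafka's own producer-side cap (default ~1 MB); raise it to
    # match whatever the broker/topic now allows.
    "message.max.bytes": 5 * 1024 * 1024,
})
```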