Hi all, I'm getting the errors below, and I found in another thread that they are caused by the schema-registry pod.
My question here is: why were all the affected tables still updated successfully in DataHub even though the ingestion reported these failures? Can anyone help?
The DataHub version is 0.9.6.1.
```
{'total_records_written': 14084,
'records_written_per_second': 3,
'warnings': [],
'failures': [{'error': 'Unable to emit metadata to DataHub GMS',
'info': {'exceptionClass': 'com.linkedin.restli.server.RestLiServiceException',
'stackTrace': 'com.linkedin.restli.server.RestLiServiceException [HTTP Status:500]: '
'org.apache.kafka.common.errors.SerializationException: Error serializing Avro message\n'
'\tat com.linkedin.metadata.restli.RestliUtil.toTask(RestliUtil.java:42)\n'
'\tat com.linkedin.metadata.restli.RestliUtil.toTask(RestliUtil.java:50)',
'message': 'org.apache.kafka.common.errors.SerializationException: Error serializing Avro message',
'status': 500,
'id': 'urn:li:dataset:(urn:li:dataPlatform:hive,table1,PROD)'}},
{'error': 'Unable to emit metadata to DataHub GMS',
'info': {'exceptionClass': 'com.linkedin.restli.server.RestLiServiceException',
'stackTrace': 'com.linkedin.restli.server.RestLiServiceException [HTTP Status:500]: '
'org.apache.kafka.common.errors.SerializationException: Error serializing Avro message\n'
'\tat com.linkedin.metadata.restli.RestliUtil.toTask(RestliUtil.java:42)\n'
'\tat com.linkedin.metadata.restli.RestliUtil.toTask(RestliUtil.java:50)',
'message': 'org.apache.kafka.common.errors.SerializationException: Error serializing Avro message',
'status': 500,
'id': 'urn:li:dataset:(urn:li:dataPlatform:hive,table2,PROD)'}},
'... sampled of 20 total elements'],
'start_time': '2023-11-17 11:37:44.939247 (1 hour, 16 minutes and 44.12 seconds ago)',
'current_time': '2023-11-17 12:54:29.060362 (now)',
'total_duration_in_seconds': 4604.12,
'gms_version': 'null',
'pending_requests': 0}
Pipeline finished with at least 20 failures; produced 14104 events in 1 hour, 16 minutes and 43.84 seconds.
```
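To clarify what I mean by "updated successfully": a quick read-back against GMS for one of the URNs listed under failures shows the metadata is there. The sketch below is roughly how such a check could be done; the GMS URL is a placeholder for my deployment, the URN is just one of the failed ones from the report, and the helper methods are the Python SDK read helpers as far as I know.

```python
# Minimal sketch: read back one of the "failed" URNs directly from GMS to see
# whether its metadata actually landed. Server URL is a placeholder.
from datahub.ingestion.graph.client import DataHubGraph, DatahubClientConfig

graph = DataHubGraph(DatahubClientConfig(server="http://localhost:8080"))

urn = "urn:li:dataset:(urn:li:dataPlatform:hive,table1,PROD)"

# Both helpers return None if the aspect is missing for the entity.
props = graph.get_dataset_properties(urn)    # datasetProperties aspect
schema = graph.get_schema_metadata(urn)      # schemaMetadata aspect from the hive source

print(props)
print("schema present:", schema is not None)
```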