With the prerequisites broker you’ll only have the default metrics available, but at a minimum you can check how the pod looks from a CPU and memory perspective. It’s also worth looking at the logs to see if anything looks off.
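Something like this should do it (a rough sketch — the `prerequisites-kafka-0` pod name and namespace are assumptions; adjust to whatever your release is actually named):

```shell
# CPU / memory usage for the broker pod (requires metrics-server)
kubectl top pod prerequisites-kafka-0 -n datahub

# Recent logs from the broker, plus any previous crashed container
kubectl logs prerequisites-kafka-0 -n datahub --tail=200
kubectl logs prerequisites-kafka-0 -n datahub --previous

# Restart counts and recent events (OOMKills show up here)
kubectl describe pod prerequisites-kafka-0 -n datahub
```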
It’s also worth noting that the prerequisites setup is generally there to get a test instance up; for production load it’s recommended to use a managed Kafka service, unless your team is really experienced with managing these external services in production.
I haven’t found anything abnormal in the logs, and the pods look okay from a CPU and memory perspective. It’s curious that the pipeline works until it gets to this one Explore: urn:li:dataPlatform:looker,shiphero.explore.fulfillment_members,PROD. Or is that just a red herring?
Separate from the ingestion issue: without having made any changes to the cluster, I’m now having issues using Google Sign-in:
Retryable PersistenceException: Error when batch flush on sql: insert into metadata_aspect_v2 (urn, aspect, version, metadata, createdOn, createdBy, createdFor, systemmetadata) values (?,?,?,?,?,?,?,?)
Have you seen this error before <@UV5UEC3LN>?