Troubleshooting Datahub Backup and Ingestion Issues with Docker Deployment

Original Slack Thread

Hi guys, I’m having some issues with datahub backup. I had a domain "Application" with a Data Product. After restoring from a backup, the domain is visible on the homepage, but it is not shown on the domain page itself. How can I make it visible on the domain page?

Another issue is with secrets on the ingestion page: I had a secret created previously, but after restoring from the backup it is no longer visible in the UI. When I try to recreate it, I get an error saying the secret already exists. How can I make the secret visible again after the restore?

I deployed using Docker with RDS, and I run the restore using datahub-upgrade with `-u RestoreIndices`.

Hello <@U05F857J7NE> ,
The restore indices job reads your entire database and creates MAEs (Metadata Audit Events) to be consumed by the MAE Consumer. The MAE Consumer then processes all of these events in batch mode to update the Elasticsearch indices. If you have a considerable volume of entities in your database, the MAE Consumer can take a long time to update everything, and that time is affected by the 'flush period' parameter, the topic's partition count, and its replication factor. What I can recommend is to wait a few hours after the restore indices job finishes, then check whether everything shows up correctly in your front-end.
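For reference, a Docker-based restore along the lines described above can be run with the `datahub-upgrade` container. This is a hedged sketch, not an exact copy of any one deployment: the env-file path and image tag below are assumptions and should be adjusted to match your own setup, and the optional `-a` batch arguments are shown only as an illustration of how throughput can be tuned.

```shell
# Run the RestoreIndices upgrade task against your existing database.
# NOTE: the env file path and image tag are assumptions -- point them
# at the env file and datahub-upgrade version used by your deployment.
docker run \
  --env-file docker/datahub-upgrade/env/docker.env \
  acryldata/datahub-upgrade:head \
  -u RestoreIndices

# Optionally, batch behavior can be tuned with -a arguments, e.g.:
#   -a batchSize=1000      # rows read per batch (illustrative value)
#   -a batchDelayMs=100    # pause between batches (illustrative value)
```

After the job reports completion, the MAE Consumer may still be working through the emitted events, which is why the indices can lag behind the database for a while.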