Challenges with DataHub Installation and Elasticsearch Component Management in Helm Deployment

Original Slack Thread

Hello Community!
We have been struggling with our Datahub installation especially around the ELasticsearch component. We have deployed using the helm method.
We have had issues over the past year with severe slowness and the UI not updating, and we got the suggestion to spin up our own ES cluster from the Bitnami chart, which we did. That resolved the slowness problem.
However, we run into problems whenever we have to reinstall ES, since that involves deleting the PVCs holding the index data. Once all components come back up, we run the restore-indices cronjob, and that's where the problem is. The job completes fine, but the UI is very slow to update; the updates hang or stop partway through and then resume again hours later. We have wasted full days trying to get our metadata back.
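For context, the reinstall-and-restore cycle I'm describing looks roughly like this (a sketch only: the release name, namespace, and PVC label are assumptions, and the CronJob name follows the DataHub Helm chart's default restore-indices job template; check your own install before running anything):

```shell
# Remove the old ES release and the PVCs holding the index data
# (selector is illustrative -- verify with `kubectl get pvc -n datahub`)
helm uninstall elasticsearch -n datahub
kubectl delete pvc -l app.kubernetes.io/name=elasticsearch -n datahub

# Reinstall ES from the Bitnami chart
helm install elasticsearch bitnami/elasticsearch -n datahub

# Once GMS is healthy, trigger a one-off run of the restore-indices
# job from the CronJob template the DataHub chart ships
kubectl create job -n datahub \
  --from=cronjob/datahub-datahub-restore-indices-job-template \
  restore-indices-manual

# Watch its progress
kubectl logs -f job/restore-indices-manual -n datahub
```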

I wanted to understand from whoever in the community is using the Helm installation: how have you set up your components, and what steps do you take to reindex metadata, given that the metadata storage DB is never wiped? How is your ES set up, and what resources have you allocated to GMS, frontend, ES, Kafka, etc.? Are all your components within the same k8s cluster?
Hope I get some responses :slightly_smiling_face:

<@U03MF8MU5P0> might be able to speak to this!

For production instances, I would suggest using a managed Elasticsearch/OpenSearch offering such as Amazon OpenSearch Service.
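If you go the managed route, pointing the chart at an external cluster is mostly a matter of overriding the Elasticsearch connection values. A minimal sketch, assuming the `global.elasticsearch.*` keys from the DataHub Helm chart's values.yaml (the endpoint below is a placeholder, and auth/SSL settings depend on how your domain is configured):

```shell
# Point DataHub at an external managed OpenSearch domain instead of
# an in-cluster ES statefulset (endpoint is a placeholder)
helm upgrade datahub datahub/datahub -n datahub \
  --set global.elasticsearch.host=my-domain.us-east-1.es.amazonaws.com \
  --set global.elasticsearch.port=443 \
  --set global.elasticsearch.useSSL=true \
  --set elasticsearchSetupJob.enabled=true
```

This also sidesteps the PVC-deletion problem entirely, since the index data no longer lives in the k8s cluster.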