Recommended Memory Allocation for Data Ingestion in v0.10.4 Deployment

Original Slack Thread

Hi team, we’re using v0.10.4. I’m running an ingestion and it keeps failing due to memory issues. I’ve already increased our memory resources to 1Gi for both limits and requests, up from 512Mi/256Mi. What memory allocations do you recommend for this deployment? I’ve attached the ingestion logs as well. Thanks.

Memory allocation:

```yaml
  enabled: true
  image:
    repository: acryldata/datahub-actions
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 300m
      memory: 1Gi
```
![attachment](https://files.slack.com/files-pri/TUMKD5EGJ-F06EPELKYA1/exec-urn_li_datahubexecutionrequest_982078b7-26d8-41d9-8f9b-8df0bda9cf28.log?t=xoxe-973659184562-6705490291811-6708051934148-dd1595bd5f63266bc09e6166373c7a3c)

AFAIK there are no recommended settings beyond the defaults. How much memory ingestion actually needs depends on which ingestion sources you use and on the volume of metadata being ingested. In some cases I’ve seen it require several GiB of memory (e.g. 6Gi). I guess the recommendation is: as much memory as it needs to work (and hopefully there’s no memory leak).
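For reference, here is a minimal sketch of what that could look like as a Helm values override, extending the snippet above. The parent key `acryl-datahub-actions` and the 4Gi figure are assumptions, not an official recommendation; tune the numbers against the peak usage you see in your own ingestion logs:

```yaml
# Hypothetical values.yaml override for the datahub-actions pod.
# The parent key and the 4Gi limit are assumptions; size the limit
# to the largest ingestion run you observe for your sources.
acryl-datahub-actions:
  enabled: true
  image:
    repository: acryldata/datahub-actions
  resources:
    limits:
      cpu: 500m
      memory: 4Gi   # ceiling for heavy sources; some have needed ~6Gi
    requests:
      cpu: 300m
      memory: 1Gi   # steady-state reservation
```

Keeping the request below the limit lets the pod burst during large ingestion runs without the scheduler permanently reserving the worst-case memory on the node.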