Hi Team, I'm deploying DataHub on OpenShift and followed this page for the deployment: https://datahubproject.io/docs/deploy/kubernetes. As the first step I'm running the prerequisites helm install:
helm install prerequisites datahub/datahub-prerequisites --values <>
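For context, the full sequence is essentially the following (assuming the chart repo is added as shown in the linked docs and the values file below is saved as values.yaml):

helm repo add datahub https://helm.datahubproject.io/
helm repo update
helm install prerequisites datahub/datahub-prerequisites --values values.yaml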
My values.yaml is below:
# Copy this file and update to the configuration of choice
elasticsearch:
  enabled: true   # set this to false, if you want to provide your own ES instance.

  # If you're running in production, set this to 3 and comment out antiAffinity below
  # Or alternatively if you're running production, bring your own ElasticSearch
  replicas: 1
  minimumMasterNodes: 1
  # Set replicas to 1 and uncomment this to allow the instance to be scheduled on
  # a master node when deploying on a single node Minikube / Kind / etc cluster.
  antiAffinity: "soft"

  initContainers:
    - name: set-permissions
      image: busybox
      command: ["sh", "-c", "chown -R 1000:0 /usr/share/elasticsearch/config && chmod -R 777 /usr/share/elasticsearch/config"]

  # If you are running a multi-replica cluster, comment this out
  clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"

  # Shrink default JVM heap.
  esJavaOpts: "-Xmx512m -Xms512m -Xlog:gc*,gc+age=trace,safepoint:file=/tmp/gc.log:utctime,pid,tags:filecount=32,filesize=64m"

  # Allocate smaller chunks of memory per pod.
  resources:
    requests:
      cpu: "2"
      memory: "8Gi"
    limits:
      cpu: "2"
      memory: "8Gi"

  # Request smaller persistent volumes.
  volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "powerstore-sc"
    resources:
      requests:
        storage: 50Gi

  securityContext:
    runAsNonRoot: false
    runAsUser: 1000790000
    runAsGroup: 1000790000
    fsGroup: 1000790000
    fsGroupChangePolicy: "Always"
# Official neo4j chart, supports both community and enterprise editions
# see https://neo4j.com/docs/operations-manual/current/kubernetes/ for more information
# source: https://github.com/neo4j/helm-charts
neo4j:
  enabled: true
  nameOverride: neo4j
  neo4j:
    name: neo4j
    edition: "community"
    acceptLicenseAgreement: "yes"
    defaultDatabase: "graph.db"
    password: "datahub123"
    # For better security, add the password to a neo4j-secrets k8s secret with neo4j-username, neo4j-password and NEO4J_AUTH, and uncomment below
    # NEO4J_AUTH should be composed like so: {Username}/{Password}
    # passwordFromSecret: neo4j-secrets

  # Set security context for pod
  securityContext:
    runAsNonRoot: false
    runAsUser: 1000790000
    runAsGroup: 1000790000
    fsGroup: 1000790000
    fsGroupChangePolicy: "Always"

  # Disallow privilegeEscalation on container level
  containerSecurityContext:
    allowPrivilegeEscalation: false

  # Create a volume for neo4j, SSD storage is recommended
  volumes:
    data:
      mode: "volumeClaimTemplate"
      volumeClaimTemplate:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "powerstore-sc"
        resources:
          requests:
            storage: 50Gi

  env:
    NEO4J_PLUGINS: '["apoc"]'
mysql:
  enabled: true
  auth:
    # For better security, add mysql-secrets k8s secret with mysql-root-password, mysql-replication-password and mysql-password (see the sketch after this file)
    existingSecret: mysql-secrets

postgresql:
  enabled: false
  auth:
    # For better security, add postgresql-secrets k8s secret with postgres-password, replication-password and password
    existingSecret: postgresql-secrets
# Using gcloud-proxy requires the node in a GKE cluster to have Cloud SQL Admin scope;
# you will need to create a new node and migrate the workload if your current node does not have this scope
gcloud-sqlproxy:
  enabled: false
  # Specify an existing secret holding the cloud-sql service account credentials; if not specified,
  # the default compute engine service account will be used and it needs to have the Cloud SQL Client role
  existingSecret: ""
  # The key in the existing secret that stores the credentials
  existingSecretKey: ""
  # SQL connection settings
  cloudsql:
    # MySQL instances:
    # update with your GCP project, the region of your Cloud SQL instance and the id of your Cloud SQL instance
    # use port 3306 for MySQL, or other port you set for your SQL instance.
    instances:
      # GCP Cloud SQL instance id
      - instance: ""
        # GCP project where the instance exists.
        project: ""
        # GCP region where the instance exists.
        region: ""
        # Port number for the proxy to expose for this instance.
        port: 3306
cp-helm-charts:
  enabled: false
  # Schema registry is under the community license
  cp-schema-registry:
    enabled: false
    kafka:
      bootstrapServers: "prerequisites-kafka:9092"  # <>-kafka:9092
  cp-kafka:
    enabled: false
  cp-zookeeper:
    enabled: false
  cp-kafka-rest:
    enabled: false
  cp-kafka-connect:
    enabled: false
  cp-ksql-server:
    enabled: false
  cp-control-center:
    enabled: false
# Bitnami version of Kafka that deploys open source Kafka https://artifacthub.io/packages/helm/bitnami/kafka
kafka:
  enabled: true
  kraft:
    enabled: false
  zookeeper:
    enabled: true
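For completeness, the mysql-secrets secret referenced in the values above has to exist before the install. A minimal sketch of creating it, following the DataHub docs (the password value is a placeholder):

oc create secret generic mysql-secrets --from-literal=mysql-root-password=<choose-a-password>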
After deployment, all of the prerequisite pods are up except elasticsearch-master:
[root@sepcsah yamlfiles]# oc get sts
NAME READY AGE
elasticsearch-master 0/1 133m
prerequisites-kafka 1/1 133m
prerequisites-mysql 1/1 133m
prerequisites-neo4j 1/1 133m
prerequisites-zookeeper 1/1 133m
[root@sepcsah yamlfiles]# oc get pods
NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 0/1 CrashLoopBackOff 5 (2m48s ago) 6m3s
prerequisites-kafka-0 1/1 Running 14 (89m ago) 133m
prerequisites-mysql-0 1/1 Running 18 (71m ago) 133m
prerequisites-neo4j-0 1/1 Running 0 86m
prerequisites-zookeeper-0 1/1 Running 0 88m
[root@sepcsah yamlfiles]# oc logs elasticsearch-master-0
Defaulted container "elasticsearch" out of: elasticsearch, configure-sysctl (init)
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.tools.launchers.JvmOption.flagsFinal(JvmOption.java:119)
at org.elasticsearch.tools.launchers.JvmOption.findFinalOptions(JvmOption.java:81)
at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:38)
at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:135)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:86)
[root@sepcsah yamlfiles]#
Does anyone have an idea about this issue? There is ample memory available, so I'm not sure what is causing it. Any leads would be helpful.