We are running v10.4 and v10.5 in our different environments. We have recently noticed that in the Analytics screen, the KPI metrics no longer show up, e.g. weekly active users.
Is there some config which controls this behaviour, or do these KPIs show up automatically? If the latter, how can we troubleshoot what's wrong?
<@U03BEML16LB> <@U01GCJKA8P9> <@U01GZEETMEZ>
Hi - I'm curious, do the KPIs not show up even after searching/browsing through entities? <@U03BEML16LB> could you confirm whether this is expected behavior?
hey Saral! so did this start happening after an upgrade? and you used to be able to see KPI metrics but you no longer can? also - by KPI metrics do you mean the metrics you screenshotted there or the usage stats like monthly active users etc?
I mean the monthly/weekly active users.
yes, it seems this problem started after 10.4. I noticed the getAnalyticsCharts query is returning empty. Is it possible the datahub_usage index is corrupted or broken?
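One quick way to sanity-check that hypothesis is to ask Elasticsearch about the usage index directly. This is only a minimal sketch, assuming the usage events land in a `datahub_usage_event` index/alias (as in the logs later in this thread) and that the cluster is reachable without auth from wherever you run it; adjust the URL and authentication for your setup.

```python
import requests

# Assumed endpoint, taken from the pasted GMS logs; change to match your cluster.
ES = "http://elasticsearch-master-hl:9200"

# Does the usage index (or alias) exist, and is it healthy?
print(requests.get(f"{ES}/_cat/indices/datahub_usage_event*?v&h=index,health,status,docs.count,store.size").text)

# How many documents does it hold?
print(requests.get(f"{ES}/datahub_usage_event/_count").json())
```

If the index is missing or the document count is zero, that points at the ingestion/indexing side rather than the Analytics UI.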
hmm okay, gotcha. and one more thing if you could - do you mind opening up your browser's network tab and checking out the graphql calls we make to get data for this page (you can see them when you first navigate to the Analytics page), called getAnalyticsCharts and getHighlights? I'm curious if there are any errors or anything that we're not surfacing in the UI. same thing with your GMS logs
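If the network tab is awkward to dig through, the same call can be replayed outside the browser. The sketch below is illustrative only: the GMS address, the personal access token, and the `groupId`/`title` selection on the chart groups are assumptions, so adjust them to your deployment and schema version and inspect both `data` and `errors` in the response.

```python
import requests

GMS = "http://localhost:8080"        # assumed GMS address; adjust for your environment
TOKEN = "<personal-access-token>"    # a DataHub PAT, if metadata service auth is enabled

# Selection set is illustrative; trim it if your schema differs.
query = """
query {
  getAnalyticsCharts {
    groupId
    title
  }
}
"""

resp = requests.post(
    f"{GMS}/api/graphql",
    json={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code)
print(resp.json())   # look at both "data" and "errors" here
```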
gonna tag <@U04UKA5L5LK> as well in case she knows of anything that may have gone on on the indexing side of things for getting this usage info from our analytics service
2023-09-19 14:22:26,989 [Thread-11656] WARN org.elasticsearch.client.RestClient - request [POST http://elasticsearch-master-hl:9200/datasetindex_v2/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true] returned 1 warnings: [299 Elasticsearch-8.9.1-a813d015ef1826148d9d389bd1c0d781c6e349f0 "[ignore_throttled] parameter is deprecated because frozen indices have been deprecated. Consider cold or frozen tiers in place of frozen indices."]
2023-09-19 14:22:26,991 [Thread-11653] WARN org.elasticsearch.client.RestClient - request [POST http://elasticsearch-master-hl:9200/chartindex_v2/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true] returned 1 warnings: [299 Elasticsearch-8.9.1-a813d015ef1826148d9d389bd1c0d781c6e349f0 "[ignore_throttled] parameter is deprecated because frozen indices have been deprecated. Consider cold or frozen tiers in place of frozen indices."]
2023-09-19 14:22:27,002 [Thread-11662] WARN org.elasticsearch.client.RestClient - request [POST http://elasticsearch-master-hl:9200/datasetindex_v2/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true] returned 1 warnings: [299 Elasticsearch-8.9.1-a813d015ef1826148d9d389bd1c0d781c6e349f0 "[ignore_throttled] parameter is deprecated because frozen indices have been deprecated. Consider cold or frozen tiers in place of frozen indices."]
2023-09-19 14:22:27,131 [I/O dispatcher 1] ERROR c.l.m.s.e.update.BulkListener - Error feeding bulk request. No retries left.
Request: Failed to perform bulk request: index [datahub_usage_event], optype: [CREATE], type [_doc], id [PageViewEvent_urn%3Ali%3Acorpuser%3Agky249_1695133346825_38667]
java.io.IOException: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=http://elasticsearch-master-hl:9200, response=HTTP/1.1 200 OK}
    at org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1783)
    at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:636)
    at org.elasticsearch.client.RestClient$1.completed(RestClient.java:376)
    at org.elasticsearch.client.RestClient$1.completed(RestClient.java:370)
    at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
    at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
    at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
    at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.NullPointerException: null
    at java.base/java.util.Objects.requireNonNull(Objects.java:221)
    at org.elasticsearch.action.DocWriteResponse.<init>(DocWriteResponse.java:127)
    at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:54)
    at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:39)
    at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:107)
    at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:104)
    at org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:159)
    at org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:188)
    at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1911)
    at org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAsyncAndParseEntity$10(RestHighLevelClient.java:1699)
    at org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1781)
    ... 18 common frames omitted
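The stack trace shows the Java client failing while parsing the response to a bulk write, so it can help to replay a minimal bulk create by hand and look at the raw body Elasticsearch returns. The sketch below is only illustrative: the host comes from the logs above, the test document is entirely made up, and it should only be run if writing a throwaway document into `datahub_usage_event` is acceptable in that environment.

```python
import json
import requests

ES = "http://elasticsearch-master-hl:9200"   # host taken from the pasted logs; adjust as needed

# The bulk API expects newline-delimited JSON: an action line, then the document,
# then a trailing newline. The test document below is made up for debugging only.
action = {"create": {"_index": "datahub_usage_event", "_id": "debug-test-1"}}
doc = {"type": "PageViewEvent", "timestamp": 1695133346825}
body = json.dumps(action) + "\n" + json.dumps(doc) + "\n"

resp = requests.post(
    f"{ES}/_bulk?timeout=1m",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
# Print the raw body rather than parsing it -- this is what the Java client in the
# stack trace above fails to make sense of.
print(resp.status_code)
print(resp.text)
```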
Would you be able to query Elasticsearch directly to see if the data is still present? I remember there was a bug with the usage indices getting truncated a few versions back. If that's the case, you may want to restore them from snapshots.
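A direct query along those lines might look like the sketch below. The index name comes from the logs above, and the `timestamp` sort field is an assumption about how the usage events are mapped (the `unmapped_type` keeps the query from failing if it isn't); it pulls a handful of the most recent documents so you can see whether data from before and after the upgrade is still there.

```python
import json
import requests

ES = "http://elasticsearch-master-hl:9200"   # host from the logs; adjust as needed

query = {
    "size": 5,
    "sort": [{"timestamp": {"order": "desc", "unmapped_type": "long"}}],
    "query": {"match_all": {}},
}

resp = requests.post(
    f"{ES}/datahub_usage_event/_search",
    json=query,
    headers={"Content-Type": "application/json"},
)
hits = resp.json().get("hits", {})
print("total:", hits.get("total"))
for hit in hits.get("hits", []):
    # Print a truncated view of each event so the output stays readable.
    print(json.dumps(hit.get("_source", {}), indent=2)[:500])
```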