Getting "index yente-entities does not exist" error

Hi Team,

We are currently facing an issue with Yente and would appreciate your guidance.

Setup Details

  • We are using Docker Compose with:

    • Elasticsearch (docker.elastic.co/elasticsearch/elasticsearch:8.7.0)

    • Yente (ghcr.io/opensanctions/yente:latest)

  • Environment variable configured:

    • YENTE_INDEX_URL=http://index:9200

  • Elasticsearch is up and running, and accessible.
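
For context, here is a minimal sketch of the relevant parts of our docker-compose.yml (service names, ports, and the healthcheck are illustrative, not the exact file):

```yaml
# Sketch only - names and options are approximate, not our exact file.
services:
  index:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.7.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      retries: 5

  yente:
    image: ghcr.io/opensanctions/yente:latest
    environment:
      - YENTE_INDEX_URL=http://index:9200
    depends_on:
      index:
        condition: service_healthy
    ports:
      - "8000:8000"
```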

Issue

We are consistently getting the following error from Yente:

“Index yente-entities does not exist. This may be caused by a misconfiguration, or the initial ingestion of data is still ongoing.”

On checking Elasticsearch (_cat/indices), the yente-entities index is not present.

Questions

  1. Does Yente require a separate ingestion step to create/populate the yente-entities index, or was there any previous behavior where it auto-initialized?

  2. Has there been any recent change in Yente regarding index initialization or dependency on external ingestion?

  3. What is the recommended approach to ensure the index is created and kept up to date in a Docker-based deployment?

Any clarification or recommendations would be very helpful.

Thanks in advance!

Hi,

thanks for reaching out!

Could you share an excerpt of your yente logs so I can better diagnose the issue? Here are some thoughts on what might be going on:

  • you did not supply an OpenSanctions delivery token, which is required when using OpenSanctions data. See the "OpenSanctions data" page in the yente docs for more info.
  • yente is still reindexing – a full reindex takes a while, up to an hour depending on the performance of your machine.

By default, yente is configured to auto-reindex. This can be disabled using YENTE_AUTO_REINDEX, in which case you’ll have to set up your own yente reindex cronjob. Which one you should choose in production depends on your deployment – if you’re only running a single yente, the auto reindexing functionality is likely fine. If you’re running a larger deployment with multiple yentes, you’ll want a separate reindex job to make sure they don’t step on each other’s toes.
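
For illustration, a setup with auto-reindexing disabled could look roughly like this; the service name, path, and schedule below are examples, and the exact invocation depends on your deployment:

```
# In the yente service's environment (docker-compose.yml):
#   YENTE_AUTO_REINDEX=false
#
# Host crontab entry running a reindex every hour at :30 (example schedule),
# assuming the compose project lives in /opt/yente and the service is named "yente":
30 * * * * cd /opt/yente && docker compose run --rm yente yente reindex
```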

Hello,

Thanks for the quick response.

Please find below the recent logs from the Yente container:

[error] Indexing error: YenteIndexError('Could not index entities: 414 document(s) failed to index.') [yente.search.indexer] dataset=default index=yente-entities-default-00920260415065429-bex
Traceback (most recent call last):
  File "/app/yente/provider/elastic.py", line 220, in bulk_index
    await async_bulk(
  File "/venv/lib/python3.12/site-packages/elasticsearch/_async/helpers.py", line 337, in async_bulk
    async for ok, item in async_streaming_bulk(
  File "/venv/lib/python3.12/site-packages/elasticsearch/_async/helpers.py", line 252, in async_streaming_bulk
    async for data, (ok, info) in azip(  # type: ignore
  File "/venv/lib/python3.12/site-packages/elasticsearch/_async/helpers.py", line 156, in azip
    yield tuple([await x.__anext__() for x in aiters])
                 ^^^^^^^^^^^^^^^^^^^
  File "/venv/lib/python3.12/site-packages/elasticsearch/_async/helpers.py", line 127, in _process_bulk_chunk
    for item in gen:
  File "/venv/lib/python3.12/site-packages/elasticsearch/helpers/actions.py", line 274, in _process_bulk_chunk_success
    raise BulkIndexError(f"{len(errors)} document(s) failed to index.", errors)
elasticsearch.helpers.BulkIndexError: 414 document(s) failed to index.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/yente/search/indexer.py", line 147, in index_entities
    await provider.bulk_index(docs)
  File "/app/yente/provider/elastic.py", line 228, in bulk_index
    raise YenteIndexError(f"Could not index entities: {exc}") from exc
yente.exc.YenteIndexError: Could not index entities: 414 document(s) failed to index.

2026-04-15T12:36:02.063935Z [warning  ] Deleting partial index         [yente.search.indexer] index=yente-entities-default-00920260415065429-bex

2026-04-15T12:36:02.176185Z [info     ] Index update complete.         [yente.search.indexer] changed=False

2026-04-15T12:36:09.482794Z [info     ] /healthz                       [yente] action=request agent=curl/8.5.0 client_ip=127.0.0.1 code=200 method=GET path=/healthz query= referer=None took=0.0009293556213378906 trace_id=d8b55d67afa24cb79a6bf478c232d185

Thanks for sharing that error message. I’ve seen messages like this before - they are usually a knock-on effect of another error. Can you maybe:

  • Share the version of yente you are using.
  • Check whether there is another error earlier in the app log, or an error message in the Elasticsearch (index) container's logs.

Basically, the message means: ES refused to index more OpenSanctions data. This could be because the data doesn't match the structure the index expects (very old yente), or because ES is generally not in a state to accept more data.
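
When the bulk helper fails like this, the per-document error items usually point at one underlying cause. As a sketch (the sample items below are made up, but mirror the shape that the elasticsearch bulk helpers attach to `BulkIndexError.errors`), you could tally the rejection reasons like this:

```python
from collections import Counter

# Hypothetical error items in the shape the elasticsearch bulk helpers
# report in BulkIndexError.errors: one dict per failed document, keyed
# by the bulk action name ("index").
errors = [
    {"index": {"_id": "Q1", "status": 400,
               "error": {"type": "mapper_parsing_exception",
                         "reason": "failed to parse field [properties.birthDate]"}}},
    {"index": {"_id": "Q2", "status": 429,
               "error": {"type": "cluster_block_exception",
                         "reason": "index blocked: disk usage exceeded flood-stage watermark"}}},
    {"index": {"_id": "Q3", "status": 400,
               "error": {"type": "mapper_parsing_exception",
                         "reason": "failed to parse field [properties.birthDate]"}}},
]

def summarize(errors):
    """Count failed documents by error type so one dominant cause stands out."""
    reasons = Counter()
    for item in errors:
        op = next(iter(item.values()))  # unwrap the action wrapper ("index")
        err = op.get("error", {})
        reasons[err.get("type", "unknown")] += 1
    return reasons

print(summarize(errors).most_common())
# → [('mapper_parsing_exception', 2), ('cluster_block_exception', 1)]
```

If most of the 414 failures share one error type, that type (mapping mismatch vs. cluster block, for instance) tells you which of the two causes above you're dealing with.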

Hi,

Thanks for the guidance.

Here are the details you requested:

  1. Yente version: latest
    Package yente · GitHub

  2. Elasticsearch logs (relevant entries; quotes normalized, and the repeated ECS metadata fields trimmed after the first entry for readability):

    {"@timestamp":"2026-04-14T00:36:00.957Z", "log.level":"INFO", "message":"[yente-entities-default-00920260413185425-dam/tI4psraGTiK2CBLO0Z2Vfg] create_mapping", "ecs.version":"1.2.0", "service.name":"ES_ECS", "event.dataset":"elasticsearch.server", "process.thread.name":"elasticsearch[index][masterService#updateTask][T#1]", "log.logger":"org.elasticsearch.cluster.metadata.MetadataMappingService", "elasticsearch.cluster.uuid":"WeIoNvcHT_qrF3WBmbYqRA", "elasticsearch.node.id":"57VxAJ5eRjC6y__eD-Sg6Q", "elasticsearch.node.name":"index", "elasticsearch.cluster.name":"opensanctions-index"}
    {"@timestamp":"2026-04-14T00:36:17.979Z", "log.level":"INFO", "message":"[yente-entities-default-00920260413185425-dam/tI4psraGTiK2CBLO0Z2Vfg] deleting index", "log.logger":"org.elasticsearch.cluster.metadata.MetadataDeleteIndexService", …}

    (routine ML maintenance and SLM snapshot-retention INFO entries at 00:54 and 01:30 omitted)

    {"@timestamp":"2026-04-14T01:36:00.797Z", "log.level":"INFO", "message":"applying create index request using existing index [yente-entities-default-00920260413065436-gga] metadata", "log.logger":"org.elasticsearch.cluster.metadata.MetadataCreateIndexService", …}
    {"@timestamp":"2026-04-14T01:36:00.800Z", "log.level":"INFO", "message":"[yente-entities-default-00920260413185425-dam] creating index, cause [clone_index], templates , shards [1]/[1]", "log.logger":"org.elasticsearch.cluster.metadata.MetadataCreateIndexService", …}
    {"@timestamp":"2026-04-14T01:36:00.801Z", "log.level":"INFO", "message":"updating number_of_replicas to [0] for indices [yente-entities-default-00920260413185425-dam]", "log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService", …}
    {"@timestamp":"2026-04-14T01:36:00.837Z", "log.level":"INFO", "message":"[yente-entities-default-00920260413185425-dam/BgyLMhOPTRipR_isvMp6CA] create_mapping", "log.logger":"org.elasticsearch.cluster.metadata.MetadataMappingService", …}
    {"@timestamp":"2026-04-14T01:36:18.615Z", "log.level":"INFO", "message":"[yente-entities-default-00920260413185425-dam/BgyLMhOPTRipR_isvMp6CA] deleting index", "log.logger":"org.elasticsearch.cluster.metadata.MetadataDeleteIndexService", …}

    (the same applying-create-index → clone_index → create_mapping → delete-roughly-18-seconds-later cycle repeats every hour from 02:36 through 08:36, first for index yente-entities-default-00920260413185425-dam and then for yente-entities-default-00920260414005427-mys)

    {"@timestamp":"2026-04-14T04:14:09.669Z", "log.level":"WARN", "message":"this node is locked into cluster UUID [WeIoNvcHT_qrF3WBmbYqRA] but [cluster.initial_master_nodes] is set to [index]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see 'Important Elasticsearch configuration' in the Elasticsearch Guide [8.7]", "log.logger":"org.elasticsearch.cluster.coordination.ClusterBootstrapService", …}
    {“@timestamp”:“2026-04-14T08:36:18.046Z”, “log.level”: “INFO”, “message”:“[yente-entities-default-00920260414005427-mys/MgG1_jmjQpaFM5jJ7EXXzA] deleting index”, “ecs.version”: “1.2.0”,“service.name”:“ES_ECS”,“event.dataset”:“elasticsearch.server”,“process.thread.name”:“elasticsearch[index][masterService#updateTask][T#1]”,“log.logger”:“org.elasticsearch.cluster.metadata.MetadataDeleteIndexService”,“elasticsearch.cluster.uuid”:“WeIoNvcHT_qrF3WBmbYqRA”,“elasticsearch.node.id”:“57VxAJ5eRjC6y__eD-Sg6Q”,“elasticsearch.node.name”:“index”,“elasticsearch.cluster.name”:“opensanctions-index”}
    {“@timestamp”:“2026-04-14T09:36:00.938Z”, “log.level”: “INFO”, “message”:“applying create index request using existing index [yente-entities-default-00920260413065436-gga] metadata”, “ecs.version”: “1.2.0”,“service.name”:“ES_ECS”,“event.dataset”:“elasticsearch.server”,“process.thread.name”:“elasticsearch[index][masterService#updateTask][T#1]”,“log.logger”:“org.elasticsearch.cluster.metadata.MetadataCreateIndexService”,“elasticsearch.cluster.uuid”:“WeIoNvcHT_qrF3WBmbYqRA”,“elasticsearch.node.id”:“57VxAJ5eRjC6y__eD-Sg6Q”,“elasticsearch.node.name”:“index”,“elasticsearch.cluster.name”:“opensanctions-index”}

Additional context:

  • We are using a persistent volume for Elasticsearch data.

  • The setup was working earlier, but has been failing for the last month.

Thanks!

Hi Parth,

The BulkIndexError is the symptom: Elasticsearch rejected documents in a bulk insert, but the per-document rejection reason isn't visible in what you've shared yet. That reason is what we need. A few things to try, roughly in order of likelihood:
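As a side note, the per-document reason lives in the `items` array of the Elasticsearch bulk response. If you can capture a raw response body (or the `errors` list off a `BulkIndexError`), a small helper like this sketch will surface the reasons — the response shape follows the documented `_bulk` API format:

```python
# Sketch: extract per-document rejection reasons from a parsed
# Elasticsearch _bulk API response body, which looks like
# {"errors": true, "items": [{"index": {..., "error": {...}}}, ...]}.

def bulk_error_reasons(bulk_response: dict) -> list[str]:
    """Return 'index-name: error-type: reason' for every failed item."""
    reasons = []
    for item in bulk_response.get("items", []):
        # each item is keyed by its operation type: index/create/update/delete
        for op, result in item.items():
            error = result.get("error")
            if error:
                reasons.append(
                    f"{result.get('_index', '?')}: "
                    f"{error.get('type', '?')}: {error.get('reason', '?')}"
                )
    return reasons


if __name__ == "__main__":
    sample = {
        "errors": True,
        "items": [
            {"index": {"_index": "yente-entities", "status": 429, "error": {
                "type": "cluster_block_exception",
                "reason": "blocked by: [TOO_MANY_REQUESTS/12/disk usage "
                          "exceeded flood-stage watermark]"}}},
            {"index": {"_index": "yente-entities", "status": 201}},
        ],
    }
    for line in bulk_error_reasons(sample):
        print(line)
```

The `sample` payload above is made up for illustration, but a `cluster_block_exception` in that position is exactly what a flood-stage block looks like.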

  1. Rule out a disk-watermark read-only block first. “Worked for ages, broken for the last month” on a persistent volume is a textbook ES flood-stage trip: once disk usage crosses ~95%, ES switches indices into read-only mode and every bulk write fails. Please share the output of:

curl -s http://index:9200/_cluster/health?pretty
curl -s http://index:9200/_cat/allocation?v
curl -s http://index:9200/_cat/indices?v

If you see any index with an index.blocks.read_only_allow_delete flag, free up space and then:

curl -XPUT http://index:9200/_all/_settings \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'
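If you'd rather check for the block programmatically, here's a sketch that scans the parsed JSON of `GET /_all/_settings` for the flag (assumes you've already fetched the settings into a dict, e.g. with `curl` piped into a file):

```python
# Sketch: given the parsed JSON of `GET /_all/_settings`, list every index
# carrying the read_only_allow_delete block that ES applies at flood stage.
# The response maps index name -> {"settings": {"index": {...}}}.

def blocked_indices(all_settings: dict) -> list[str]:
    blocked = []
    for index_name, body in all_settings.items():
        index_settings = body.get("settings", {}).get("index", {})
        # ES serializes the flag as the string "true" when it is set
        if index_settings.get("blocks", {}).get("read_only_allow_delete") == "true":
            blocked.append(index_name)
    return blocked
```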
  2. Share more of the yente application log. The snippet you posted is the tail of the failure — the lines leading up to it usually tell us which dataset and version yente was trying to index, whether the data fetch succeeded, and sometimes carry ES-side error fragments that didn’t fit in the short error message. Could you grab the full log from a yente startup or reindex run, ideally from the first “Indexing…” line all the way through the deletion of the partial index? Any warning-level lines from before the failure are particularly useful.

  3. Pin yente to a specific version. You’re running `ghcr.io/opensanctions/yente:latest`. The :latest tag drifts, so a working deployment can break silently after an image repull. It would help us if you can run:

docker inspect ghcr.io/opensanctions/yente:latest | grep -i digest

and share the image digest. As a quick experiment, try pinning to a yente tag from 2–3 months ago — if indexing succeeds on the older tag, we know this is a regression in a recent yente and can narrow it down.
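For the pinning itself, the Compose change is just the image tag. A sketch of the relevant fragment (service name and tag are placeholders — use whatever your compose file calls the service and whichever released version you want to test):

```yaml
services:
  yente:
    # pin to an explicit release instead of the drifting :latest tag
    image: ghcr.io/opensanctions/yente:<version>   # e.g. a tag from 2–3 months ago
```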

If you can post the _cluster/health + _cat/allocation output from step 1 and a wider chunk of log from step 2, that should let us pinpoint which of these it is.

Thanks!

  • Friedrich

Thanks for the detailed guidance — this was very helpful.

Here are the requested details:

  1. Cluster health:
   {
     "cluster_name": "opensanctions-index",
     "status": "yellow",
     "timed_out": false,
     "number_of_nodes": 1,
     "number_of_data_nodes": 1,
     "active_primary_shards": 9,
     "active_shards": 9,
     "relocating_shards": 0,
     "initializing_shards": 0,
     "unassigned_shards": 2,
     "delayed_unassigned_shards": 0,
     "number_of_pending_tasks": 0,
     "number_of_in_flight_fetch": 0,
     "task_max_waiting_in_queue_millis": 0,
     "active_shards_percent_as_number": 81.8181818181818
   }
  2. Allocation:
 shards disk.indices disk.used disk.avail disk.total disk.percent node
     9        2.5gb     9.8gb       15gb     24.9gb           39 index
     2                                                      UNASSIGNED
  3. Indices:
health status index                                        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   read_me                                      xJWTfmqWQ_K0hJlyAPHq3w   1   1          1            0      5.8kb          5.8kb
green  open   yente-entities-default-00920260417065447-bhn qYXRMZYGTMeGcKDs3YqleA   1   0    4146759       139464      2.5gb          2.5gb
  4. Image digest:
ghcr.io/opensanctions/yente@sha256:fec4862f064397f50d8c19d606183b0f2860a8967028adf9087e237673a579f9
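Sanity-checking the allocation numbers above against the default ES disk watermarks (low 85%, high 90%, flood_stage 95% of disk.total), as a quick sketch:

```python
# Sketch: compare the reported disk usage against the default Elasticsearch
# disk watermarks. Numbers taken from the _cat/allocation output above.

disk_used_gb = 9.8
disk_total_gb = 24.9

used_pct = 100 * disk_used_gb / disk_total_gb

WATERMARKS = {"low": 85.0, "high": 90.0, "flood_stage": 95.0}
tripped = [name for name, pct in WATERMARKS.items() if used_pct >= pct]

print(f"disk usage: {used_pct:.1f}%")          # ~39.4%, matching disk.percent
print(f"watermarks tripped: {tripped or 'none'}")
```

So on these numbers no watermark is anywhere near tripping, which seems to rule out the flood-stage theory.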

Additional observations:

  • We are using a persistent Elasticsearch volume.

  • The system was working earlier and started failing recently without any major config changes.

  • We are currently using the yente:latest image and will try pinning to v5.0.2.

Please let us know if the issue points to disk watermark limits, mapping incompatibility, or a regression in the latest yente version.

Thanks again for your support.