StreamFailureError when calling .query() on an ON CLUSTER DDL with a comment at the beginning of the query #499

@lpelecq

Description

Describe the bug

Calling the .query() method to run a CREATE TABLE … ON CLUSTER statement that starts with a comment produces a StreamFailureError.
The table is created, but the Python call fails with a StreamFailureError exception.
Any of the following changes prevents the exception:

  • Calling .command() instead of .query()
  • Removing the comment
  • Removing the ON CLUSTER … from the query (though this obviously changes what the query does)
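The first two workarounds can also be combined client-side: keep the comment in your SQL source but strip it before sending. A minimal sketch (the helper name is mine, not part of clickhouse-connect):

```python
def strip_leading_sql_comments(query: str) -> str:
    """Drop leading `--` line comments from a query string so the
    driver never sees a comment-first ON CLUSTER DDL (hypothetical
    client-side workaround, not a clickhouse-connect API)."""
    lines = query.lstrip().splitlines()
    while lines and lines[0].lstrip().startswith("--"):
        lines.pop(0)  # discard the leading comment line
    return "\n".join(lines)

# ch.query(strip_leading_sql_comments(ddl))  # or simply ch.command(ddl)
```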

Steps to reproduce

  1. Set up a ClickHouse cluster (I tested both 2 shards × 2 replicas and 1 shard × 2 replicas)
  2. Call the problematic query

Expected behaviour

Running the DDL query through the query() method should not raise an error.

Code example

from clickhouse_connect import get_client
ch = get_client(...)

ch.query(
    """--sql
    CREATE OR REPLACE TABLE my_table ON CLUSTER cluster1 (
      a String,
      b String
    )
    ENGINE = MergeTree
    ORDER BY tuple()
    """
)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/lpelecq/[…]/venv/lib/python3.9/site-packages/clickhouse_connect/driver/client.py", line 237, in query
    return self._query_with_context(query_context)
  File "/home/lpelecq/[…]/venv/lib/python3.9/site-packages/clickhouse_connect/driver/httpclient.py", line 252, in _query_with_context
    query_result = self._transform.parse_response(byte_source, context)
  File "/home/lpelecq/[…]/venv/lib/python3.9/site-packages/clickhouse_connect/driver/transform.py", line 63, in parse_response
    first_block = get_block()
  File "/home/lpelecq/[…]/venv/lib/python3.9/site-packages/clickhouse_connect/driver/transform.py", line 58, in get_block
    raise StreamFailureError(extract_error_message(source.last_message)) from None
clickhouse_connect.driver.exceptions.StreamFailureError: clickhouse-node1	9000	0		1	0
clickhouse-node2	9000	0		0	0
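For what it's worth, the text carried by the StreamFailureError above does not look like a server error message at all: it appears to be the tab-separated rows of the distributed DDL status result set (host, port, status, error, num_hosts_remaining, num_hosts_active). A hedged sketch that decodes it, assuming that column layout:

```python
def parse_ddl_status(message: str):
    """Split the StreamFailureError payload into per-host DDL status
    rows (hypothetical helper; column layout assumed from the
    ON CLUSTER result set, not a clickhouse-connect API)."""
    rows = []
    for line in message.strip().splitlines():
        host, port, status, error, remaining, active = line.split("\t")
        rows.append({
            "host": host,
            "port": int(port),
            "status": int(status),  # 0 means the DDL succeeded on that host
            "error": error,
            "num_hosts_remaining": int(remaining),
            "num_hosts_active": int(active),
        })
    return rows
```

On the output above, every host reports status 0, which is consistent with the table actually being created despite the exception.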

clickhouse-connect and/or ClickHouse server logs

Node 1

2025.06.04 15:29:49.102385 [ 86 ] {} <Trace> HTTPHandler-factory: HTTP Request for HTTPHandler-factory. Method: POST, Address: 192.168.128.65:35242, User-Agent: clickhouse-connect/0.8.17 (lv:py/3.9.2; mode:sync; os:linux; os_user:lpelecq), Length: 165, Content Type: text/plain; charset=utf-8, Transfer Encoding: identity, X-Forwarded-For: (none)
2025.06.04 15:29:49.102895 [ 86 ] {} <Trace> DynamicQueryHandler: Request URI: /?date_time_input_format=best_effort&cast_string_to_dynamic_use_inference=1&session_id=707b9edb-2ddb-40b6-8ee2-170813de620c&wait_end_of_query=1&send_progress_in_http_headers=1&http_headers_progress_interval_ms=120000&client_protocol_version=54405&enable_http_compression=1
2025.06.04 15:29:49.103030 [ 86 ] {} <Debug> HTTP-Session-51d278db-0785-45f4-b7e5-d09b98dfdd71: Authenticating user 'default' from 192.168.128.65:35242
2025.06.04 15:29:49.103112 [ 86 ] {} <Debug> HTTP-Session-51d278db-0785-45f4-b7e5-d09b98dfdd71: 51d278db-0785-45f4-b7e5-d09b98dfdd71 Authenticated with global context as user 94309d50-4f52-5250-31bd-74fecac179db
2025.06.04 15:29:49.103242 [ 86 ] {} <Debug> HTTP-Session-51d278db-0785-45f4-b7e5-d09b98dfdd71: Creating named session context with name: 707b9edb-2ddb-40b6-8ee2-170813de620c, user_id: 94309d50-4f52-5250-31bd-74fecac179db
2025.06.04 15:29:49.103348 [ 86 ] {} <Trace> NamedSessionsStorage: Create new session with session_id: 707b9edb-2ddb-40b6-8ee2-170813de620c, user_id: 94309d50-4f52-5250-31bd-74fecac179db
2025.06.04 15:29:49.103563 [ 86 ] {} <Trace> HTTP-Session-51d278db-0785-45f4-b7e5-d09b98dfdd71: 51d278db-0785-45f4-b7e5-d09b98dfdd71 Creating query context from session context, user_id: 94309d50-4f52-5250-31bd-74fecac179db, parent context user: default
2025.06.04 15:29:49.103723 [ 86 ] {} <Trace> TemporaryFileOnLocalDisk: Creating temporary file 'tmp21778e82-46fe-449e-8669-59d3a7d3c478'
2025.06.04 15:29:49.104758 [ 86 ] {90011af9-a67d-42d3-81d1-4afcaf28c774} <Debug> executeQuery: (from 192.168.128.65:35242) --sql
 CREATE OR REPLACE TABLE my_table ON CLUSTER cluster1 ( a String, b String ) ENGINE = MergeTree ORDER BY tuple() FORMAT Native (stage: Complete)
2025.06.04 15:29:49.124435 [ 756 ] {} <Debug> DDLWorker: Cleaning queue
2025.06.04 15:29:49.133990 [ 751 ] {} <Debug> DDLWorker: Scheduling tasks
2025.06.04 15:29:49.136167 [ 751 ] {} <Trace> DDLWorker: scheduleTasks: initialized=true, size_before_filtering=12, queue_size=12, entries=query-0000000000..query-0000000011, first_failed_task_name=none, current_tasks_size=1, last_current_task=query-0000000010, last_skipped_entry_name=none
2025.06.04 15:29:49.136278 [ 751 ] {} <Debug> DDLWorker: Will schedule 1 tasks starting from query-0000000011
2025.06.04 15:29:49.136313 [ 751 ] {} <Trace> DDLWorker: Checking task query-0000000011
2025.06.04 15:29:49.139759 [ 751 ] {} <Debug> DDLWorker: Processing task query-0000000011 (query: CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple(), backup restore: false)
2025.06.04 15:29:49.141679 [ 751 ] {} <Debug> DDLWorker: Executing query: CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple()
2025.06.04 15:29:49.141978 [ 751 ] {00dfb797-4d74-4c8a-9920-ced31fe827a9} <Debug> executeQuery: (from 0.0.0.0:0, user: , initial_query_id: 90011af9-a67d-42d3-81d1-4afcaf28c774) /* ddl_entry=query-0000000011 */ CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple() (stage: Complete)
2025.06.04 15:29:49.142422 [ 751 ] {00dfb797-4d74-4c8a-9920-ced31fe827a9} <Trace> IInterpreterUnionOrSelectQuery: The new analyzer is enabled, but the old interpreter is used. It can be a bug, please report it. Will disable 'allow_experimental_analyzer' setting (for query: SELECT count() SETTINGS aggregate_functions_null_for_empty = false, transform_null_in = false, legacy_column_name_of_tuple_literal = false)
2025.06.04 15:29:49.145638 [ 751 ] {00dfb797-4d74-4c8a-9920-ced31fe827a9} <Debug> default._tmp_replace_17c18f4e95824e5c_jhijhbms7d7a2837 (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Loading data parts
2025.06.04 15:29:49.146050 [ 751 ] {00dfb797-4d74-4c8a-9920-ced31fe827a9} <Debug> default._tmp_replace_17c18f4e95824e5c_jhijhbms7d7a2837 (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Loaded data parts (0 items) took 0.000410549 seconds
2025.06.04 15:29:49.150182 [ 331 ] {} <Trace> default._tmp_replace_17c18f4e95824e5c_jhijhbms7d7a2837 (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Didn't start merge: There are no parts that can be merged. (Collector returned empty ranges set)
2025.06.04 15:29:49.150921 [ 751 ] {00dfb797-4d74-4c8a-9920-ced31fe827a9} <Debug> DDLWorker: Executed query: /* ddl_entry=query-0000000011 */ CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple()
2025.06.04 15:29:49.152776 [ 751 ] {} <Debug> DDLWorker: Waiting for queue updates
2025.06.04 15:29:49.190161 [ 86 ] {90011af9-a67d-42d3-81d1-4afcaf28c774} <Trace> TemporaryFileOnLocalDisk: Removing temporary file 'tmp21778e82-46fe-449e-8669-59d3a7d3c478'
2025.06.04 15:29:49.190848 [ 86 ] {90011af9-a67d-42d3-81d1-4afcaf28c774} <Debug> executeQuery: Read 2 rows, 120.00 B in 0.086496 sec., 23.122456529781722 rows/sec., 1.35 KiB/sec.
2025.06.04 15:29:49.190981 [ 86 ] {90011af9-a67d-42d3-81d1-4afcaf28c774} <Debug> DynamicQueryHandler: Done processing query
2025.06.04 15:29:49.191037 [ 86 ] {90011af9-a67d-42d3-81d1-4afcaf28c774} <Debug> MemoryTracker: Query peak memory usage: 3.98 MiB.
2025.06.04 15:29:49.191050 [ 86 ] {} <Debug> HTTP-Session-51d278db-0785-45f4-b7e5-d09b98dfdd71: 51d278db-0785-45f4-b7e5-d09b98dfdd71 Logout, user_id: 94309d50-4f52-5250-31bd-74fecac179db
2025.06.04 15:29:49.422543 [ 332 ] {} <Trace> default.my_table (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Didn't start merge: There are no parts that can be merged. (Collector returned empty ranges set)

Node 2

2025.06.04 15:29:49.124424 [ 756 ] {} <Debug> DDLWorker: Cleaning queue
2025.06.04 15:29:49.135593 [ 752 ] {} <Debug> DDLWorker: Scheduling tasks
2025.06.04 15:29:49.137055 [ 752 ] {} <Trace> DDLWorker: scheduleTasks: initialized=true, size_before_filtering=12, queue_size=12, entries=query-0000000000..query-0000000011, first_failed_task_name=none, current_tasks_size=1, last_current_task=query-0000000010, last_skipped_entry_name=none
2025.06.04 15:29:49.137124 [ 752 ] {} <Debug> DDLWorker: Will schedule 1 tasks starting from query-0000000011
2025.06.04 15:29:49.137228 [ 752 ] {} <Trace> DDLWorker: Checking task query-0000000011
2025.06.04 15:29:49.139822 [ 752 ] {} <Debug> DDLWorker: Processing task query-0000000011 (query: CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple(), backup restore: false)
2025.06.04 15:29:49.142832 [ 752 ] {} <Debug> DDLWorker: Executing query: CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple()
2025.06.04 15:29:49.143175 [ 752 ] {e1c2f757-5e63-4096-bbc1-e37b24256213} <Debug> executeQuery: (from 0.0.0.0:0, user: , initial_query_id: 90011af9-a67d-42d3-81d1-4afcaf28c774) /* ddl_entry=query-0000000011 */ CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple() (stage: Complete)
2025.06.04 15:29:49.143644 [ 752 ] {e1c2f757-5e63-4096-bbc1-e37b24256213} <Trace> IInterpreterUnionOrSelectQuery: The new analyzer is enabled, but the old interpreter is used. It can be a bug, please report it. Will disable 'allow_experimental_analyzer' setting (for query: SELECT count() SETTINGS aggregate_functions_null_for_empty = false, transform_null_in = false, legacy_column_name_of_tuple_literal = false)
2025.06.04 15:29:49.148226 [ 752 ] {e1c2f757-5e63-4096-bbc1-e37b24256213} <Debug> default._tmp_replace_17c18f4e95824e5c_alihbece60871eae (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Loading data parts
2025.06.04 15:29:49.148678 [ 752 ] {e1c2f757-5e63-4096-bbc1-e37b24256213} <Debug> default._tmp_replace_17c18f4e95824e5c_alihbece60871eae (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Loaded data parts (0 items) took 0.000448594 seconds
2025.06.04 15:29:49.151586 [ 484 ] {} <Trace> default._tmp_replace_17c18f4e95824e5c_alihbece60871eae (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Didn't start merge: There are no parts that can be merged. (Collector returned empty ranges set)
2025.06.04 15:29:49.152293 [ 752 ] {e1c2f757-5e63-4096-bbc1-e37b24256213} <Debug> DDLWorker: Executed query: /* ddl_entry=query-0000000011 */ CREATE OR REPLACE TABLE default.my_table UUID '7f61fe7d-c468-47dc-adfb-7ac6a1fe4368' (`a` String, `b` String) ENGINE = MergeTree ORDER BY tuple()
2025.06.04 15:29:49.153568 [ 752 ] {} <Debug> DDLWorker: Waiting for queue updates
2025.06.04 15:29:49.317045 [ 489 ] {} <Trace> default.my_table (7f61fe7d-c468-47dc-adfb-7ac6a1fe4368): Didn't start merge: There are no parts that can be merged. (Collector returned empty ranges set)

Configuration

Environment

  • clickhouse-connect version: 0.8.17
  • Python version: 3.9
  • Operating system: Ubuntu

ClickHouse server

  • ClickHouse Server version: 25.5.2.47
  • ClickHouse Server non-default settings, if any:


Labels

  • bug (Something isn't working)
  • minor
