Commit fb14219

Add CHANGELOG with migration documentation for lowercase columns
1 parent 016b4f8 commit fb14219

1 file changed: CHANGELOG.md (27 additions, 0 deletions)
# Changelog

## 1.0.0

### Breaking Changes

- **`read_dlo()` and `read_dmo()` now return DataFrames with all-lowercase column names.**

  Column names returned by both `QueryAPIDataCloudReader` and `SFCLIDataCloudReader` are now lowercased to match the column names produced by the deployed Data Cloud environment (e.g., `unitprice__c` instead of `UnitPrice__c`).

  **Why:** In the deployed environment, column names are normalized to lowercase by the underlying Iceberg metadata layer. The local SDK previously returned the original API casing, causing "column does not exist" errors when scripts were deployed. This change aligns local behavior with the cloud.

  **Migration:** Update any column references in your local scripts to use lowercase:

  ```python
  # Before
  df.withColumn("Description__c", upper(col("Description__c")))
  df.drop("KQ_Id__c")
  df["UnitPrice__c"]

  # After
  df.withColumn("description__c", upper(col("description__c")))
  df.drop("kq_id__c")
  df["unitprice__c"]
  ```

  Scripts already running in Data Cloud are unaffected — the cloud always returned lowercase column names.
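For scripts that must tolerate either casing (for example, during a staged migration), one defensive pattern is to normalize column names immediately after reading, so downstream code can always use lowercase. This is a minimal sketch, not part of the SDK: `lowercase_columns` is a hypothetical helper, shown here with pandas for illustration.

```python
import pandas as pd


def lowercase_columns(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with every column name lowercased,
    matching the casing produced by the deployed environment.
    (Hypothetical helper; not part of the SDK.)"""
    return df.rename(columns=str.lower)


# Simulate a frame read with the old, mixed-case API casing.
df = pd.DataFrame({"UnitPrice__c": [1.0], "KQ_Id__c": ["a"]})
df = lowercase_columns(df)
print(list(df.columns))  # ['unitprice__c', 'kq_id__c']
```

Normalizing once at the read boundary keeps the rest of the script identical whether it runs against the local SDK (old or new version) or in the deployed environment.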
