- **Added `runTime: datacustomcode.runtime.function.Runtime` to the function contract for `codeType` `function`.** Functions must now accept the runtime as an argument.

  Why: `runTime` provides access to resources (`llm_gateway` / `file`) that are available during function execution.

  Migration: use `function(request: dict, runTime: Runtime)` instead of `function(request: dict)`:

  ```python
  # Before
  def function(request: dict):
      pass

  # After
  def function(request: dict, runTime: Runtime):
      pass
  ```
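  To make the new contract concrete, here is a self-contained sketch with a stand-in `Runtime` class (the real one lives at `datacustomcode.runtime.function.Runtime`; the attribute names `llm_gateway` and `file` come from the note above, but their exact shape is our assumption):

  ```python
  class Runtime:
      """Stand-in for datacustomcode.runtime.function.Runtime."""
      def __init__(self, llm_gateway=None, file=None):
          self.llm_gateway = llm_gateway  # LLM gateway resource
          self.file = file                # file resource

  def function(request: dict, runTime: Runtime) -> dict:
      # The runtime is now a mandatory second argument; use it to
      # reach execution-scoped resources instead of globals.
      available = [name for name in ("llm_gateway", "file")
                   if getattr(runTime, name) is not None]
      return {"echo": request, "resources": available}

  result = function({"q": "hello"}, Runtime(llm_gateway=object()))
  # result["resources"] == ["llm_gateway"]
  ```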
- **Removed the `row_limit` parameter from `read_dlo()` and `read_dmo()`.** These methods no longer accept a `row_limit` argument. When running locally, reads are automatically capped at 1000 rows to prevent accidentally fetching large datasets during development. When deployed to Data Cloud, no limit is applied and all records are returned.

  Why: The `row_limit` parameter duplicated PySpark's built-in `.limit()` and created a behavioral difference between local and deployed environments. The 1000-row safety net is now handled internally via the `default_row_limit` setting in `config.yaml`, and deployed environments naturally omit it.

  Migration: Remove any `row_limit` arguments from your `read_dlo()` and `read_dmo()` calls. If you need a specific number of rows, use PySpark's `.limit()` on the returned DataFrame:

  ```python
  # Before
  df = client.read_dlo("MyObject__dll", row_limit=500)

  # After
  df = client.read_dlo("MyObject__dll").limit(500)
  ```
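  The local safety net now lives in configuration rather than at call sites; a sketch of the relevant `config.yaml` entry (the key name comes from the note above, the surrounding structure is our assumption):

  ```yaml
  # Caps local read_dlo()/read_dmo() results during development.
  # Deployed Data Cloud environments ignore this and return all rows.
  default_row_limit: 1000
  ```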
- **`read_dlo()` and `read_dmo()` now return DataFrames with all-lowercase column names.** Column names returned by both `QueryAPIDataCloudReader` and `SFCLIDataCloudReader` are now lowercased to match the column names produced by the deployed Data Cloud environment (e.g., `unitprice__c` instead of `UnitPrice__c`).

  Why: In the deployed environment, column names are normalized to lowercase by the underlying Iceberg metadata layer. The local SDK previously returned the original API casing, causing "column does not exist" errors when scripts were deployed. This change aligns local behavior with the cloud.

  Migration: Update any column references in your local scripts to use lowercase:

  ```python
  # Before
  df.withColumn("Description__c", upper(col("Description__c")))
  df.drop("KQ_Id__c")
  df["UnitPrice__c"]

  # After
  df.withColumn("description__c", upper(col("description__c")))
  df.drop("kq_id__c")
  df["unitprice__c"]
  ```

  Scripts already running in Data Cloud are unaffected — the cloud always returned lowercase column names.
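  If a script has many column references, they can be normalized in one place instead of edited individually; a minimal sketch (the helper name is ours, and the commented PySpark line assumes a standard DataFrame):

  ```python
  def normalize_columns(columns):
      """Map API-cased Data Cloud column names to the lowercase
      form the deployed environment returns."""
      return [c.lower() for c in columns]

  # With PySpark: df = df.toDF(*normalize_columns(df.columns))
  print(normalize_columns(["UnitPrice__c", "KQ_Id__c"]))
  # -> ['unitprice__c', 'kq_id__c']
  ```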