Commit 9f1e03c
update changelog
1 parent 392456a

1 file changed: CHANGELOG.md
Lines changed: 20 additions & 0 deletions

# Changelog

## 2.0.0

### Breaking Changes

- **Removed the `row_limit` parameter from `read_dlo()` and `read_dmo()`.**

  These methods no longer accept a `row_limit` argument. When running locally, reads are automatically capped at 1000 rows to prevent accidentally fetching large datasets during development. When deployed to Data Cloud, no limit is applied and all records are returned.

  **Why:** The `row_limit` parameter duplicated PySpark's built-in `.limit()` and created a behavioral difference between local and deployed environments. The 1000-row safety net is now handled internally via the `default_row_limit` setting in `config.yaml`, and deployed environments omit that setting entirely.
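
  For reference, a minimal sketch of the relevant `config.yaml` entry; the `default_row_limit` key name comes from this release note, while the comment and surrounding layout are illustrative assumptions:

  ```yaml
  # config.yaml -- used for local development only; deployed Data Cloud
  # environments omit this setting, so no cap is applied there.
  default_row_limit: 1000  # cap applied to local read_dlo()/read_dmo() reads
  ```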

  **Migration:** Remove any `row_limit` arguments from your `read_dlo()` and `read_dmo()` calls. If you need a specific number of rows, use PySpark's `.limit()` on the returned DataFrame:

  ```python
  # Before
  df = client.read_dlo("MyObject__dll", row_limit=500)

  # After
  df = client.read_dlo("MyObject__dll").limit(500)
  ```
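
  If you have many call sites, a small wrapper can bridge the gap while you migrate incrementally. This helper is a hypothetical sketch, not part of the library:

  ```python
  # Hypothetical shim: reproduces the old row_limit keyword on top of the
  # new API by applying PySpark's .limit() to the returned DataFrame.
  def read_dlo_limited(client, name, row_limit=None):
      df = client.read_dlo(name)
      return df.limit(row_limit) if row_limit is not None else df

  df = read_dlo_limited(client, "MyObject__dll", row_limit=500)
  ```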

## 1.0.0

### Breaking Changes
