This is an advanced workflow, specifically designed for extremely large incremental models that take a long time to run even during development. It solves for:
- Transforming data with constant schema evolution in JSON and nested array data types.
- Retaining the history of a calculated column while applying a new calculation to new rows going forward.
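Beyond passing `--forward-only` at plan time, a model can also be pinned as forward-only in its own definition so that every plan treats it this way. A minimal sketch, assuming the `forward_only` property on the incremental model kind; the model name, columns, and `compute_score` macro are hypothetical:

```sql
MODEL (
  name analytics.events_enriched,  -- hypothetical model name
  kind INCREMENTAL_BY_TIME_RANGE (
    time_column event_ts,
    forward_only true  -- changes to this model never backfill history
  )
);

SELECT
  event_ts,
  payload,                          -- JSON column whose schema evolves over time
  @compute_score(payload) AS score  -- hypothetical calculation applied only to new rows
FROM raw.events
WHERE event_ts BETWEEN @start_ts AND @end_ts
```

With this in place, historical rows keep the values produced by the old calculation, while new intervals pick up the new logic.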
When you apply the plan to `prod` after the dev workflow, it will NOT backfill historical data. It will only execute model batches **forward only** for new intervals (new rows).
=== "SQLMesh"

    ```bash
    sqlmesh plan dev --forward-only
    ```

    ```bash
    sqlmesh plan <environment> --forward-only
    ```

=== "Tobiko Cloud"

    ```bash
    tcloud sqlmesh plan dev --forward-only
    ```

    ```bash
    tcloud sqlmesh plan <environment> --forward-only
    ```

??? "Example Output"

    - I applied a change to a new column
    - It impacts 2 downstream models
    - I enforced a forward-only plan to avoid backfilling historical data
    - I previewed the changes in a clone of the impacted models (clones will NOT be reused in production)
When this is applied to `prod`, it only executes model batches for new intervals (new rows). That is why this example produced only a virtual layer update.