Commit 9a0cde1

Merge branch 'TobikoData:main' into main
2 parents 58e18bd + 65da025 commit 9a0cde1

182 files changed

Lines changed: 13900 additions & 2406 deletions


.circleci/continue_config.yml

Lines changed: 0 additions & 1 deletion

```diff
@@ -152,7 +152,6 @@ jobs:
       - image: cimg/node:20.19.0
     resource_class: small
     steps:
-      - halt_unless_client
       - checkout
       - restore_cache:
           name: Restore pnpm Package Cache
```

.github/dependabot.yml

Lines changed: 4 additions & 0 deletions

```diff
@@ -4,3 +4,7 @@ updates:
     directory: '/'
     schedule:
       interval: 'weekly'
+  - package-ecosystem: 'github-actions'
+    directory: '/'
+    schedule:
+      interval: 'weekly'
```

.github/workflows/pr.yaml

Lines changed: 27 additions & 0 deletions

```diff
@@ -0,0 +1,27 @@
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    branches:
+      - main
+concurrency:
+  group: 'pr-${{ github.event.pull_request.number }}'
+  cancel-in-progress: true
+jobs:
+  test-vscode:
+    env:
+      PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD: 1
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
+        with:
+          node-version: '20'
+      - uses: pnpm/action-setup@v4
+        with:
+          version: latest
+      - name: Install dependencies
+        run: pnpm install
+      - name: Run CI
+        run: pnpm run ci
```
Lines changed: 46 additions & 0 deletions

```diff
@@ -0,0 +1,46 @@
+# Incident Reporting
+
+We monitor Tobiko Cloud 24/7 to ensure your projects are running smoothly.
+
+If you encounter any issues, however, you can report incidents directly in Tobiko Cloud itself.
+
+This will notify our support team, who will investigate and resolve the issue as quickly as possible.
+
+### Reporting an incident
+
+Follow these steps to report an incident in Tobiko Cloud:
+
+1. Visit the [Tobiko Cloud Incident Reporting Page](https://incidents.tobikodata.com/)
+2. Select one of the three severity levels for your incident
+3. Enter the project name the incident is related to
+    * The project name is displayed after your organization name in the Cloud UI
+4. Write a detailed description of the incident
+    * Include all relevant information that will help our support team understand and resolve the issue
+5. Click the `Submit` button to send your incident report
+6. You will receive a confirmation message indicating that your incident has been reported successfully
+7. You will hear from our support team after submitting the incident report
+
+![Tobiko Cloud incident reporting page](./incident_reporting/incident_reporting.png)
+
+### Reporting an incident when SSO is unavailable
+
+Single Sign-On (SSO) is the default way to log in to Tobiko Cloud. However, SSO could be down or not working when you need to report an incident.
+
+Tobiko Cloud provides a standalone page that doesn't require SSO so you can report an incident when SSO is not working. The page is unique to your organization.
+
+The standalone URL is available on the incident reporting page when you log in with SSO. Because accessing the standalone URL does not require SSO, you should only share it with staff authorized to report incidents.
+
+To store your standalone incident reporting URL:
+
+1. Visit the [Tobiko Cloud Incident Reporting Page](https://incidents.tobikodata.com/)
+2. Click the `Copy Standalone URL` button below the incident reporting section
+3. Save this URL in an easily accessible location in case you need to report an incident when SSO is not working
+
+!!! note "Don't wait!"
+    We recommend copying this URL *right now* so your organization is not left unable to report an incident.
+
+### SSO not enabled for your organization
+
+SSO login is required to access the standalone incident reporting URL.
+
+SSO is enabled by default in Tobiko Cloud. If it is not enabled for your organization, contact your solution architect and ask them to provide you with a standalone incident reporting URL.
```
(binary file added, 12.3 KB)

docs/concepts/audits.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -639,7 +639,7 @@ MODEL (
 You can execute audits with the `sqlmesh audit` command as follows:
 
 ```bash
-$ sqlmesh -p project audit -start 2022-01-01 -end 2022-01-02
+$ sqlmesh -p project audit --start 2022-01-01 --end 2022-01-02
 Found 1 audit(s).
 assert_item_price_is_not_null FAIL.
````
docs/guides/configuration.md

Lines changed: 111 additions & 0 deletions

````diff
@@ -381,6 +381,117 @@ Example showing default values:
 )
 ```
 
+
+### Always comparing against production
+
+By default, SQLMesh compares the current state of project files to the target `<env>` environment when `sqlmesh plan <env>` is run. However, a common expectation is that local changes should always be compared to the production environment.
+
+The `always_recreate_environment` boolean plan option alters this behavior. When enabled, SQLMesh will always attempt to compare against the production environment by recreating the target environment; if `prod` does not exist, SQLMesh falls back to comparing against the target environment.
+
+**NOTE:** Upon successful plan application, changes are still promoted to the target `<env>` environment.
+
+=== "YAML"
+
+    ```yaml linenums="1"
+    plan:
+      always_recreate_environment: True
+    ```
+
+=== "Python"
+
+    ```python linenums="1"
+    from sqlmesh.core.config import (
+        Config,
+        ModelDefaultsConfig,
+        PlanConfig,
+    )
+
+    config = Config(
+        model_defaults=ModelDefaultsConfig(dialect=<dialect>),
+        plan=PlanConfig(
+            always_recreate_environment=True,
+        ),
+    )
+    ```
+
+#### Change Categorization Example
+
+Consider this scenario with `always_recreate_environment` enabled:
+
+1. Initial state in `prod`:
+    ```sql
+    MODEL (name sqlmesh_example.test_model, kind FULL);
+    SELECT 1 AS col
+    ```
+
+2. First (breaking) change in `dev`:
+    ```sql
+    MODEL (name sqlmesh_example__dev.test_model, kind FULL);
+    SELECT 2 AS col
+    ```
+
+    ??? "Output plan example #1"
+
+        ```bash
+        New environment `dev` will be created from `prod`
+
+        Differences from the `prod` environment:
+
+        Models:
+        └── Directly Modified:
+            └── sqlmesh_example__dev.test_model
+
+        ---
+        +++
+
+          kind FULL
+        )
+        SELECT
+        - 1 AS col
+        + 2 AS col
+        ```
+
+3. Second (metadata) change in `dev`:
+    ```sql
+    MODEL (name sqlmesh_example__dev.test_model, kind FULL, owner 'John Doe');
+    SELECT 2 AS col
+    ```
+
+    ??? "Output plan example #2"
+
+        ```bash
+        New environment `dev` will be created from `prod`
+
+        Differences from the `prod` environment:
+
+        Models:
+        └── Directly Modified:
+            └── sqlmesh_example__dev.test_model
+
+        ---
+        +++
+
+        @@ -1,8 +1,9 @@
+        MODEL (
+          name sqlmesh_example.test_model,
+        + owner "John Doe",
+          kind FULL
+        )
+        SELECT
+        - 1 AS col
+        + 2 AS col
+
+        Directly Modified: sqlmesh_example__dev.test_model (Breaking)
+        Models needing backfill:
+        └── sqlmesh_example__dev.test_model: [full refresh]
+        ```
+
+Even though the second change should have been a metadata-only change (and thus not require a backfill), it is classified as breaking because the comparison is against production rather than the previous development state. This is intentional and may cause additional backfills as more changes accumulate.
+
 
 ### Gateways
 
 The `gateways` configuration defines how SQLMesh should connect to the data warehouse, state backend, and scheduler. These options are in the [gateway](../reference/configuration.md#gateway) section of the configuration reference page.
````
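The accumulation of breaking classifications described above can be reasoned about with a toy model: when every plan diffs against `prod` rather than the previous `dev` state, any earlier data-affecting difference from `prod` is re-detected on each plan. A minimal, hypothetical Python sketch (our own illustration, not SQLMesh's actual categorization code):

```python
# Hypothetical sketch (not SQLMesh's implementation): classify a model change
# by diffing against a baseline. Illustrates why comparing against prod
# re-flags an earlier breaking change even when the newest edit is metadata-only.

def classify(old: dict, new: dict) -> str:
    """Return 'breaking' if the query changed, 'metadata' if only
    non-data fields (e.g. owner) changed, else 'none'."""
    if old["query"] != new["query"]:
        return "breaking"
    if old != new:
        return "metadata"
    return "none"

prod   = {"query": "SELECT 1 AS col", "owner": None}
dev_v1 = {"query": "SELECT 2 AS col", "owner": None}        # breaking edit
dev_v2 = {"query": "SELECT 2 AS col", "owner": "John Doe"}  # metadata-only edit

# Default behavior: the second plan diffs dev_v2 against the previous dev state.
print(classify(dev_v1, dev_v2))  # metadata

# With always_recreate_environment: the second plan diffs against prod,
# so the earlier query change is still present and still breaking.
print(classify(prod, dev_v2))    # breaking
```

The sketch deliberately reduces a model to a two-field dict; the point is only that the classification depends on which baseline is chosen, not on the newest edit alone.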

docs/guides/tablediff.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -253,12 +253,12 @@ Then, specify each table's gateway in the `table_diff` command with this syntax:
 For example, we could diff the `landing.table` table across `bigquery` and `snowflake` gateways like this:
 
 ```sh
-$ sqlmesh table_diff 'bigquery|landing.table:snowflake|lake.table'
+$ tcloud sqlmesh table_diff 'bigquery|landing.table:snowflake|lake.table'
 ```
 
 This syntax tells SQLMesh to use the cross-database diffing algorithm instead of the normal within-database diffing algorithm.
 
-After adding gateways to the table names, use `table_diff` as described above - the same options apply for specifying the join keys, decimal precision, etc. See `sqlmesh table_diff --help` for a [full list of options](../reference/cli.md#table_diff).
+After adding gateways to the table names, use `table_diff` as described above - the same options apply for specifying the join keys, decimal precision, etc. See `tcloud sqlmesh table_diff --help` for a [full list of options](../reference/cli.md#table_diff).
 
 !!! warning
 
@@ -273,7 +273,7 @@ A cross-database diff is broken up into two stages.
 The first stage is a schema diff. This example shows that differences in column name case across the two tables are identified as schema differences:
 
 ```bash
-$ sqlmesh table_diff 'bigquery|sqlmesh_example.full_model:snowflake|sqlmesh_example.full_model' --on item_id --show-sample
+$ tcloud sqlmesh table_diff 'bigquery|sqlmesh_example.full_model:snowflake|sqlmesh_example.full_model' --on item_id --show-sample
 
 Schema Diff Between 'BIGQUERY|SQLMESH_EXAMPLE.FULL_MODEL' and 'SNOWFLAKE|SQLMESH_EXAMPLE.FULL_MODEL':
 ├── Added Columns:
````
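The `'gateway|table:gateway|table'` argument shown above is plain text, so it is easy to validate in a wrapper script before invoking the CLI. A small, hypothetical helper (`parse_table_ref` is our own name, not part of SQLMesh or tcloud) that splits such an argument into (gateway, table) pairs:

```python
# Hypothetical helper (not part of SQLMesh): split a cross-gateway table_diff
# argument like 'bigquery|landing.table:snowflake|lake.table' into
# (gateway, table) pairs for the source and target sides.

def parse_table_ref(arg: str) -> tuple[tuple[str, str], tuple[str, str]]:
    source, target = arg.split(":", 1)  # ':' separates source from target side

    def split_side(side: str) -> tuple[str, str]:
        gateway, table = side.split("|", 1)  # '|' separates gateway from table
        return gateway, table

    return split_side(source), split_side(target)

src, tgt = parse_table_ref("bigquery|landing.table:snowflake|lake.table")
print(src)  # ('bigquery', 'landing.table')
print(tgt)  # ('snowflake', 'lake.table')
```

Such a check is only useful for catching a missing `|` or `:` early; the CLI itself performs the authoritative parsing.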

docs/guides/vscode.md

Lines changed: 10 additions & 0 deletions

```diff
@@ -139,6 +139,16 @@ The SQLMesh VSCode extension provides the following commands in the VSCode comma
 
 ## Troubleshooting
 
+### DuckDB concurrent access
+
+If your SQLMesh project uses DuckDB to store its state, you will likely encounter problems.
+
+SQLMesh can create multiple connections to the state database, but DuckDB's local database file does not support concurrent access.
+
+Because the VSCode extension establishes a long-running process connected to the database, access conflicts are more likely than with standard SQLMesh usage from the CLI.
+
+Therefore, we do not recommend using DuckDB as a state store with the VSCode extension.
+
 ### Python environment woes
 
 The most common problem is the extension not using the correct Python interpreter.
```
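Regarding the DuckDB state-store warning added above: one way to keep DuckDB as the execution engine while avoiding file-lock conflicts is to point only the state at a server-based database via SQLMesh's `state_connection` gateway key. A hedged sketch (the PostgreSQL host, database, and credentials below are placeholders, not recommendations):

```yaml
gateways:
  local:
    connection:           # execution engine stays DuckDB
      type: duckdb
      database: db.db
    state_connection:     # state moves to a server-based database
      type: postgres
      host: localhost     # placeholder values
      port: 5432
      database: sqlmesh_state
      user: sqlmesh
      password: '...'
```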

docs/integrations/engines/azuresql.md

Lines changed: 11 additions & 5 deletions

````diff
@@ -2,29 +2,35 @@
 
 [Azure SQL](https://azure.microsoft.com/en-us/products/azure-sql) is "a family of managed, secure, and intelligent products that use the SQL Server database engine in the Azure cloud."
 
-The Azure SQL adapter only supports authentication with a username and password. It does not support authentication with Microsoft Entra or Azure Active Directory.
-
 ## Local/Built-in Scheduler
 **Engine Adapter Type**: `azuresql`
 
 ### Installation
+#### User / Password Authentication:
 ```
 pip install "sqlmesh[azuresql]"
 ```
+#### Microsoft Entra ID / Azure Active Directory Authentication:
+```
+pip install "sqlmesh[azuresql-odbc]"
+```
 
 ### Connection options
 
 | Option            | Description                                                      |     Type     | Required |
 | ----------------- | ---------------------------------------------------------------- | :----------: | :------: |
 | `type`            | Engine type name - must be `azuresql`                            |    string    |    Y     |
 | `host`            | The hostname of the Azure SQL server                             |    string    |    Y     |
-| `user`            | The username to use for authentication with the Azure SQL server |    string    |    N     |
-| `password`        | The password to use for authentication with the Azure SQL server |    string    |    N     |
+| `user`            | The username / client ID to use for authentication with the Azure SQL server |    string    |    N     |
+| `password`        | The password / client secret to use for authentication with the Azure SQL server |    string    |    N     |
 | `port`            | The port number of the Azure SQL server                          |     int      |    N     |
 | `database`        | The target database                                              |    string    |    N     |
 | `charset`         | The character set used for the connection                        |    string    |    N     |
 | `timeout`         | The query timeout in seconds. Default: no timeout                |     int      |    N     |
 | `login_timeout`   | The timeout for connection and login in seconds. Default: 60     |     int      |    N     |
 | `appname`         | The application name to use for the connection                   |    string    |    N     |
 | `conn_properties` | The list of connection properties                                | list[string] |    N     |
-| `autocommit`      | Is autocommit mode enabled. Default: false                       |     bool     |    N     |
+| `autocommit`      | Is autocommit mode enabled. Default: false                       |     bool     |    N     |
+| `driver`          | The driver to use for the connection. Default: pymssql           |    string    |    N     |
+| `driver_name`     | The driver name to use for the connection. E.g., *ODBC Driver 18 for SQL Server* |    string    |    N     |
+| `odbc_properties` | The dict of ODBC connection properties. E.g., authentication: ActiveDirectoryServicePrincipal. See more [here](https://learn.microsoft.com/en-us/sql/connect/odbc/dsn-connection-string-attribute?view=sql-server-ver16). |     dict     |    N     |
````
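Putting the new options together, a connection configured for Microsoft Entra ID service-principal authentication might look like the following sketch. The host, database, IDs, and secret are placeholders, and the `driver` value `pyodbc` is an assumption based on the table above (where the default is `pymssql`); it requires the `azuresql-odbc` extra:

```yaml
gateways:
  azure:
    connection:
      type: azuresql
      host: my-server.database.windows.net  # placeholder server name
      database: my_db                       # placeholder database
      user: <service_principal_client_id>   # Entra ID client ID
      password: <client_secret>             # Entra ID client secret
      driver: pyodbc                        # assumed ODBC driver value
      driver_name: ODBC Driver 18 for SQL Server
      odbc_properties:
        authentication: ActiveDirectoryServicePrincipal
```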
