Commit 6a21e5f

(docs): removing shell indicators
Signed-off-by: Euan <e.blackledge@stuart.com>
1 parent: da2e5fc

11 files changed: 44 additions & 43 deletions


docs/cloud/features/scheduler/airflow.md

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ Start by installing the `tobiko-cloud-scheduler-facade` library in your Airflow
 Make sure to include the `[airflow]` extra in the installation command:
 
 ``` bash
-$ pip install tobiko-cloud-scheduler-facade[airflow]
+pip install tobiko-cloud-scheduler-facade[airflow]
 ```
 
 !!! info "Mac Users"
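
Note: the `!!! info "Mac Users"` admonition closing this hunk is truncated in the diff. As a hedged aside, the usual wrinkle for Mac users here is that zsh (the default macOS shell) treats square brackets as glob characters, so the extras suffix is typically quoted:

``` bash
# Hedged sketch: quoting keeps zsh from expanding [airflow] as a glob pattern
pip install 'tobiko-cloud-scheduler-facade[airflow]'
```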

docs/cloud/features/scheduler/dagster.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ dependencies = [
 And then install it into the Python environment used by your Dagster project:
 
 ```sh
-$ pip install -e '.[dev]'
+pip install -e '.[dev]'
 ```
 
 ### Connect Dagster to Tobiko Cloud

docs/cloud/features/security/single_sign_on.md

Lines changed: 6 additions & 6 deletions
@@ -145,7 +145,7 @@ Here is what you will see if you are accessing Tobiko Cloud via Okta. Click on t
 You can see what the status of your session is with the `status` command:
 
 ``` bash
-$ tcloud auth status
+tcloud auth status
 ```
 
 
@@ -156,7 +156,7 @@ $ tcloud auth status
 Run the `login` command to begin the login process:
 
 ``` bash
-$ tcloud auth login
+tcloud auth login
 ```
 
 ![tcloud_login](./single_sign_on/tcloud_login.png)
@@ -183,11 +183,11 @@ Current Tobiko Cloud SSO session expires in 1439 minutes
 In order to delete your session information you can use the log out command:
 
 ``` bash
-> tcloud auth logout
-Logged out of Tobiko Cloud
+tcloud auth logout
+# Logged out of Tobiko Cloud
 
-> tcloud auth status
-Not currently authenticated
+tcloud auth status
+# Not currently authenticated
 ```
 
 ![tcloud_logout](./single_sign_on/tcloud_logout.png)

docs/cloud/features/xdb_diffing.md

Lines changed: 1 addition & 1 deletion
@@ -47,7 +47,7 @@ Then, specify each table's gateway in the `table_diff` command with this syntax:
 For example, we could diff the `landing.table` table across `bigquery` and `snowflake` gateways like this:
 
 ```sh
-$ tcloud sqlmesh table_diff 'bigquery|landing.table:snowflake|landing.table'
+tcloud sqlmesh table_diff 'bigquery|landing.table:snowflake|landing.table'
 ```
 
 This syntax tells SQLMesh to use the cross-database diffing algorithm instead of the normal within-database diffing algorithm.

docs/concepts/state.md

Lines changed: 6 additions & 5 deletions
@@ -92,7 +92,7 @@ The state file is a simple `json` file that looks like:
 You can export a specific environment like so:
 
 ```sh
-$ sqlmesh state export --environment my_dev -o my_dev_state.json
+sqlmesh state export --environment my_dev -o my_dev_state.json
 ```
 
 Note that every snapshot that is part of the environment will be exported, not just the differences from `prod`. The reason for this is so that the environment can be fully imported elsewhere without any assumptions about which snapshots are already present in state.
@@ -102,7 +102,7 @@ Note that every snapshot that is part of the environment will be exported, not j
 You can export local state like so:
 
 ```bash
-$ sqlmesh state export --local -o local_state.json
+sqlmesh state export --local -o local_state.json
 ```
 
 This essentially just exports the state of the local context which includes local changes that have not been applied to any virtual data environments.
@@ -174,10 +174,11 @@ If your project has [multiple gateways](../guides/configuration.md#gateways) wit
 
 ```bash
 # state export
-$ sqlmesh --gateway <gateway> state export -o state.json
-
+sqlmesh --gateway <gateway> state export -o state.json
+```
+```bash
 # state import
-$ sqlmesh --gateway <gateway> state import -i state.json
+sqlmesh --gateway <gateway> state import -i state.json
 ```
 
 ## Version Compatibility

docs/guides/configuration.md

Lines changed: 2 additions & 2 deletions
@@ -269,7 +269,7 @@ gateways:
 We can override the `dummy_pw` value with the true password `real_pw` by creating the environment variable. This example demonstrates creating the variable with the bash `export` function:
 
 ```bash
-$ export SQLMESH__GATEWAYS__MY_GATEWAY__CONNECTION__PASSWORD="real_pw"
+export SQLMESH__GATEWAYS__MY_GATEWAY__CONNECTION__PASSWORD="real_pw"
 ```
 
 After the initial string `SQLMESH__`, the environment variable name components move down the key hierarchy in the YAML specification: `GATEWAYS` --> `MY_GATEWAY` --> `CONNECTION` --> `PASSWORD`.
@@ -1492,7 +1492,7 @@ Example enabling debug mode for the CLI command `sqlmesh plan`:
 === "Bash"
 
 ```bash
-$ SQLMESH_DEBUG=1 sqlmesh plan
+SQLMESH_DEBUG=1 sqlmesh plan
 ```
 
 === "MS Powershell"

docs/guides/migrations.md

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ SQLMeshError: SQLMesh (local) is using version '1' which is behind '2' (remote).
 The project metadata can be migrated to the latest metadata format using SQLMesh's migrate command.
 
 ```bash
-> sqlmesh migrate
+sqlmesh migrate
 ```
 
 Migration should be issued manually by a single user and the migration will affect all users of the project.

docs/integrations/dbt.md

Lines changed: 6 additions & 6 deletions
@@ -19,19 +19,19 @@ Therefore, SQLMesh is packaged with multiple "extras," which you may optionally
 At minimum, using the SQLMesh dbt adapter requires installing the dbt extra:
 
 ```bash
-> pip install "sqlmesh[dbt]"
+pip install "sqlmesh[dbt]"
 ```
 
 If your project uses any SQL execution engine other than DuckDB, you must install the extra for that engine. For example, if your project runs on the Postgres SQL engine:
 
 ```bash
-> pip install "sqlmesh[dbt,postgres]"
+pip install "sqlmesh[dbt,postgres]"
 ```
 
 If you would like to use the [SQLMesh Browser UI](../guides/ui.md) to view column-level lineage, include the `web` extra:
 
 ```bash
-> pip install "sqlmesh[dbt,web]"
+pip install "sqlmesh[dbt,web]"
 ```
 
 Learn more about [SQLMesh installation and extras here](../installation.md#install-extras).
@@ -41,7 +41,7 @@ Learn more about [SQLMesh installation and extras here](../installation.md#insta
 Prepare an existing dbt project to be run by SQLMesh by executing the `sqlmesh init` command *within the dbt project root directory* and with the `dbt` template option:
 
 ```bash
-$ sqlmesh init -t dbt
+sqlmesh init -t dbt
 ```
 
 This will create a file called `sqlmesh.yaml` containing the [default model start date](../reference/model_configuration.md#model-defaults). This configuration file is a minimum starting point for enabling SQLMesh to work with your DBT project.
@@ -247,8 +247,8 @@ Instead, SQLMesh provides predefined time macro variables that can be used in th
 For example, the SQL `WHERE` clause with the "ds" column goes in a new jinja block gated by `{% if sqlmesh_incremental is defined %}` as follows:
 
 ```bash
-> WHERE
-> ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}'
+WHERE
+ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}'
 ```
 
 `{{ start_ds }}` and `{{ end_ds }}` are the jinja equivalents of SQLMesh's `@start_ds` and `@end_ds` predefined time macro variables. See all [predefined time variables](../concepts/macros/macro_variables.md) available in jinja.
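
Note: the last hunk shows only the `WHERE` clause itself. A hedged sketch of the full gated block that the surrounding prose describes, with the closing `{% endif %}` assumed since every jinja `if` block requires one:

```bash
{% if sqlmesh_incremental is defined %}
WHERE
  ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}'
{% endif %}
```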

docs/integrations/dlt.md

Lines changed: 7 additions & 7 deletions
@@ -8,7 +8,7 @@ SQLMesh enables efforless project generation using data ingested through [dlt](h
 To load data from a dlt pipeline into SQLMesh, ensure the dlt pipeline has been run or restored locally. Then simply execute the sqlmesh `init` command *within the dlt project root directory* using the `dlt` template option and specifying the pipeline's name with the `dlt-pipeline` option:
 
 ```bash
-$ sqlmesh init -t dlt --dlt-pipeline <pipeline-name> dialect
+sqlmesh init -t dlt --dlt-pipeline <pipeline-name> dialect
 ```
 
 This will create the configuration file and directories, which are found in all SQLMesh projects:
@@ -33,7 +33,7 @@ SQLMesh will also automatically generate models to ingest data from the pipeline
 The default location for dlt pipelines is `~/.dlt/pipelines/<pipeline_name>`. If your pipelines are in a [different directory](https://dlthub.com/docs/general-usage/pipeline#separate-working-environments-with-pipelines_dir), use the `--dlt-path` argument to specify the path explicitly:
 
 ```bash
-$ sqlmesh init -t dlt --dlt-pipeline <pipeline-name> --dlt-path <pipelines-directory> dialect
+sqlmesh init -t dlt --dlt-pipeline <pipeline-name> --dlt-path <pipelines-directory> dialect
 ```
 
 ### Generating models on demand
@@ -43,25 +43,25 @@ To update the models in your SQLMesh project on demand, use the `dlt_refresh` co
 - **Generate all missing tables**:
 
 ```bash
-$ sqlmesh dlt_refresh <pipeline-name>
+sqlmesh dlt_refresh <pipeline-name>
 ```
 
 - **Generate all missing tables and overwrite existing ones** (use with `--force` or `-f`):
 
 ```bash
-$ sqlmesh dlt_refresh <pipeline-name> --force
+sqlmesh dlt_refresh <pipeline-name> --force
 ```
 
 - **Generate specific dlt tables** (using `--table` or `-t`):
 
 ```bash
-$ sqlmesh dlt_refresh <pipeline-name> --table <dlt-table>
+sqlmesh dlt_refresh <pipeline-name> --table <dlt-table>
 ```
 
 - **Provide the explicit path to the pipelines directory** (using `--dlt-path`):
 
 ```bash
-$ sqlmesh dlt_refresh <pipeline-name> --dlt-path <pipelines-directory>
+sqlmesh dlt_refresh <pipeline-name> --dlt-path <pipelines-directory>
 ```
 
 #### Configuration
@@ -83,7 +83,7 @@ Load package 1728074157.660565 is LOADED and contains no failed jobs
 After the pipeline has run, generate a SQLMesh project by executing:
 
 ```bash
-$ sqlmesh init -t dlt --dlt-pipeline sushi duckdb
+sqlmesh init -t dlt --dlt-pipeline sushi duckdb
 ```
 
 Then the SQLMesh project is all set up. You can then proceed to run the SQLMesh `plan` command to ingest the dlt pipeline data and populate the SQLMesh tables:
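
Note: the colon at the end of this hunk introduces a code block that falls outside the diff; presumably it is the standard plan invocation:

```bash
# Apply the plan and populate the SQLMesh tables from the dlt pipeline data
sqlmesh plan
```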

docs/integrations/engines/bigquery.md

Lines changed: 5 additions & 5 deletions
@@ -22,7 +22,7 @@ Follow the [quickstart installation guide](../../installation.md) up to the step
 Instead of installing just SQLMesh core, we will also include the BigQuery engine libraries:
 
 ```bash
-> pip install "sqlmesh[bigquery]"
+pip install "sqlmesh[bigquery]"
 ```
 
 ### Install Google Cloud SDK
@@ -35,19 +35,19 @@ Follow these steps to install and configure the Google Cloud SDK on your compute
 - Unpack the downloaded file with the `tar` command:
 
 ```bash
-> tar -xzvf google-cloud-cli-{SYSTEM_SPECIFIC_INFO}.tar.gz
+tar -xzvf google-cloud-cli-{SYSTEM_SPECIFIC_INFO}.tar.gz
 ```
 
 - Run the installation script:
 
 ```bash
-> ./google-cloud-sdk/install.sh
+./google-cloud-sdk/install.sh
 ```
 
 - Reload your shell profile (e.g., for zsh):
 
 ```bash
-> source $HOME/.zshrc
+source $HOME/.zshrc
 ```
 
 - Run [`gcloud init` to setup authentication](https://cloud.google.com/sdk/gcloud/reference/init)
@@ -114,7 +114,7 @@ The output will look something like this:
 We've verified our connection, so we're ready to create and execute a plan in BigQuery:
 
 ```bash
-> sqlmesh plan
+sqlmesh plan
 ```
 
 ### View results in BigQuery Console
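
Note: the second hunk references `gcloud init` for authentication setup but does not show the commands themselves. As a hedged aside, local BigQuery credentials are commonly provided via application default credentials after the SDK is installed:

```bash
# Initialize the SDK, then store application default credentials locally
gcloud init
gcloud auth application-default login
```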
