sections_fts
610 rows
Link | rowid ▼ | title | content | sections_fts | rank |
---|---|---|---|---|---|
1 | 1 | Introspection | Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON. | 698 | |
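These endpoints are as useful to scripts as they are to browsers. A minimal sketch using only the Python standard library, assuming a local instance running at http://127.0.0.1:8001 (the hostname and port are placeholders):

```python
import json
from urllib.request import urlopen

# Fetch the /-/versions introspection endpoint as JSON
# (assumes a Datasette instance running locally on port 8001)
with urlopen("http://127.0.0.1:8001/-/versions.json") as response:
    versions = json.load(response)

print(versions["datasette"]["version"])
print(versions["sqlite"]["version"])
```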
2 | 2 | /-/metadata | Shows the contents of the metadata.json file that was passed to datasette serve , if any. Metadata example : { "license": "CC Attribution 4.0 License", "license_url": "http://creativecommons.org/licenses/by/4.0/", "source": "fivethirtyeight/data on GitHub", "source_url": "https://github.com/fivethirtyeight/data", "title": "Five Thirty Eight", "databases": { } } | 698 | |
3 | 3 | /-/versions | Shows the version of Datasette, Python and SQLite. Versions example : { "datasette": { "version": "0.60" }, "python": { "full": "3.8.12 (default, Dec 21 2021, 10:45:09) \n[GCC 10.2.1 20210110]", "version": "3.8.12" }, "sqlite": { "extensions": { "json1": null }, "fts_versions": [ "FTS5", "FTS4", "FTS3" ], "compile_options": [ "COMPILER=gcc-6.3.0 20170516", "ENABLE_FTS3", "ENABLE_FTS4", "ENABLE_FTS5", "ENABLE_JSON1", "ENABLE_RTREE", "THREADSAFE=1" ], "version": "3.37.0" } } | 698 | |
4 | 4 | /-/plugins | Shows a list of currently installed plugins and their versions. Plugins example : [ { "name": "datasette_cluster_map", "static": true, "templates": false, "version": "0.10", "hooks": ["extra_css_urls", "extra_js_urls", "extra_body_script"] } ] Add ?all=1 to include details of the default plugins baked into Datasette. | 698 | |
5 | 5 | /-/settings | Shows the Settings for this instance of Datasette. Settings example : { "default_facet_size": 30, "default_page_size": 100, "facet_suggest_time_limit_ms": 50, "facet_time_limit_ms": 1000, "max_returned_rows": 1000, "sql_time_limit_ms": 1000 } | 698 | |
6 | 6 | /-/config | Shows the configuration for this instance of Datasette. This is generally the contents of the datasette.yaml or datasette.json file, which can include plugin configuration as well. Config example : { "settings": { "template_debug": true, "trace_debug": true, "force_https_urls": true } } Any keys that include one of the following substrings in their names will be returned as redacted *** output, to help avoid accidentally leaking private configuration information: secret , key , password , token , hash , dsn . | 698 | |
7 | 7 | /-/databases | Shows currently attached databases. Databases example : [ { "hash": null, "is_memory": false, "is_mutable": true, "name": "fixtures", "path": "fixtures.db", "size": 225280 } ] | 698 | |
8 | 8 | /-/threads | Shows details of threads and asyncio tasks. Threads example : { "num_threads": 2, "threads": [ { "daemon": false, "ident": 4759197120, "name": "MainThread" }, { "daemon": true, "ident": 123145319682048, "name": "Thread-1" } ], "num_tasks": 3, "tasks": [ "<Task pending coro=<RequestResponseCycle.run_asgi() running at uvicorn/protocols/http/httptools_impl.py:385> cb=[set.discard()]>", "<Task pending coro=<Server.serve() running at uvicorn/main.py:361> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10365c3d0>()]> cb=[run_until_complete.<locals>.<lambda>()]>", "<Task pending coro=<LifespanOn.main() running at uvicorn/lifespan/on.py:48> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x10364f050>()]>>" ] } | 698 | |
9 | 9 | /-/actor | Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. { "actor": { "id": 1, "username": "some-user" } } | 698 | |
10 | 10 | /-/messages | The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature. | 698 | |
11 | 11 | Metadata | Data loves metadata. Any time you run Datasette you can optionally include a YAML or JSON file with metadata about your databases and tables. Datasette will then display that information in the web UI. Run Datasette like this: datasette database1.db database2.db --metadata metadata.yaml Your metadata.yaml file can look something like this: [[[cog from metadata_doc import metadata_example metadata_example(cog, { "title": "Custom title for your index page", "description": "Some description text can go here", "license": "ODbL", "license_url": "https://opendatacommons.org/licenses/odbl/", "source": "Original Data Source", "source_url": "http://example.com/" }) ]]] [[[end]]] Choosing YAML over JSON adds support for multi-line strings and comments. The above metadata will be displayed on the index page of your Datasette-powered site. The source and license information will also be included in the footer of every page served by Datasette. Any special HTML characters in description will be escaped. If you want to include HTML in your description, you can use a description_html property instead. | 698 | |
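The cog directives above generate the rendered example; for reference, the dictionary passed to metadata_example corresponds to YAML like this:

```yaml
title: Custom title for your index page
description: Some description text can go here
license: ODbL
license_url: https://opendatacommons.org/licenses/odbl/
source: Original Data Source
source_url: http://example.com/
```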
12 | 12 | Per-database and per-table metadata | Metadata at the top level of the file will be shown on the index page and in the footer on every page of the site. The license and source are expected to apply to all of your data. You can also provide metadata at the per-database or per-table level, like this: [[[cog metadata_example(cog, { "databases": { "database1": { "source": "Alternative source", "source_url": "http://example.com/", "tables": { "example_table": { "description_html": "Custom <em>table</em> description", "license": "CC BY 3.0 US", "license_url": "https://creativecommons.org/licenses/by/3.0/us/" } } } } }) ]]] [[[end]]] Each of the top-level metadata fields can be used at the database and table level. | 698 | |
13 | 13 | Source, license and about | The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional. source and source_url should be used to indicate where the underlying data came from. license and license_url should be used to indicate the license under which the data can be used. about and about_url can be used to link to further information about the project - an accompanying blog entry for example. For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page. | 698 | |
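As a sketch, here is how all three field pairs might sit together at the top level of metadata.yaml, reusing placeholder values from the examples above plus a hypothetical about link:

```yaml
source: Original Data Source
source_url: http://example.com/
license: ODbL
license_url: https://opendatacommons.org/licenses/odbl/
# about/about_url values here are hypothetical placeholders
about: Background on this project
about_url: http://example.com/about
```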
14 | 14 | Column descriptions | You can include descriptions for your columns by adding a "columns": {"name-of-column": "description-of-column"} block to your table metadata: [[[cog metadata_example(cog, { "databases": { "database1": { "tables": { "example_table": { "columns": { "column1": "Description of column 1", "column2": "Description of column 2" } } } } } }) ]]] [[[end]]] These will be displayed at the top of the table page, and will also show in the cog menu for each column. You can see an example of how these look at latest.datasette.io/fixtures/roadside_attractions . | 698 | |
15 | 15 | Setting a default sort order | By default Datasette tables are sorted by primary key. You can over-ride this default for a specific table using the "sort" or "sort_desc" metadata properties: [[[cog metadata_example(cog, { "databases": { "mydatabase": { "tables": { "example_table": { "sort": "created" } } } } }) ]]] [[[end]]] Or use "sort_desc" to sort in descending order: [[[cog metadata_example(cog, { "databases": { "mydatabase": { "tables": { "example_table": { "sort_desc": "created" } } } } }) ]]] [[[end]]] | 698 | |
16 | 16 | Setting a custom page size | Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the "size" key in metadata.json : [[[cog metadata_example(cog, { "databases": { "mydatabase": { "tables": { "example_table": { "size": 10 } } } } }) ]]] [[[end]]] This size can still be over-ridden by passing e.g. ?_size=50 in the query string. | 698 | |
17 | 17 | Setting which columns can be used for sorting | Datasette allows any column to be used for sorting by default. If you need to control which columns are available for sorting you can do so using the optional sortable_columns key: [[[cog metadata_example(cog, { "databases": { "database1": { "tables": { "example_table": { "sortable_columns": [ "height", "weight" ] } } } } }) ]]] [[[end]]] This will restrict sorting of example_table to just the height and weight columns. You can also disable sorting entirely by setting "sortable_columns": [] You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so: [[[cog metadata_example(cog, { "databases": { "my_database": { "tables": { "name_of_view": { "sortable_columns": [ "clicks", "impressions" ] } } } } }) ]]] [[[end]]] | 698 | |
18 | 18 | Specifying the label column for a table | Datasette's HTML interface attempts to display foreign key references as labelled hyperlinks. By default, it looks for referenced tables that only have two columns: a primary key column and one other. It assumes that the second column should be used as the link label. If your table has more than two columns you can specify which column should be used for the link label with the label_column property: [[[cog metadata_example(cog, { "databases": { "database1": { "tables": { "example_table": { "label_column": "title" } } } } }) ]]] [[[end]]] | 698 | |
19 | 19 | Hiding tables | You can hide tables from the database listing view (in the same way that FTS and SpatiaLite tables are automatically hidden) using "hidden": true : [[[cog metadata_example(cog, { "databases": { "database1": { "tables": { "example_table": { "hidden": True } } } } }) ]]] [[[end]]] | 698 | |
20 | 20 | Metadata reference | A full reference of every supported option in a metadata.json or metadata.yaml file. | 698 | |
21 | 21 | Top-level metadata | "Top-level" metadata refers to fields that can be specified at the root level of a metadata file. These attributes are meant to describe the entire Datasette instance. The following is the full list of allowed top-level metadata fields: title description description_html license license_url source source_url | 698 | |
22 | 22 | Database-level metadata | "Database-level" metadata refers to fields that can be specified for each database in a Datasette instance. These attributes should be listed under a database inside the "databases" field. The following is the full list of allowed database-level metadata fields: source source_url license license_url about about_url | 698 | |
23 | 23 | Table-level metadata | "Table-level" metadata refers to fields that can be specified for each table in a Datasette instance. These attributes should be listed under a specific table using the "tables" field. The following is the full list of allowed table-level metadata fields: source source_url license license_url about about_url hidden sort/sort_desc size sortable_columns label_column facets fts_table fts_pk searchmode columns | 698 | |
24 | 24 | Contributing | Datasette is an open source project. We welcome contributions! This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins . | 698 | |
25 | 25 | General guidelines | main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them. | 698 | |
26 | 26 | Setting up a development environment | If you have Python 3.8 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps. If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account. Now clone that repository somewhere on your computer: git clone git@github.com:YOURNAME/datasette If you want to get started without creating your own fork, you can do this instead: git clone git@github.com:simonw/datasette The next step is to create a virtual environment for your project and use it to install Datasette's dependencies: cd datasette # Create a virtual environment in ./venv python3 -m venv ./venv # Now activate the virtual environment, so pip can install into it source venv/bin/activate # Install Datasette and its testing dependencies python3 -m pip install -e '.[test]' That last line does most of the work: pip install -e means "install this package in a way that allows me to edit the source code in place". The .[test] option means "use the setup.py in this directory and install the optional testing dependencies as well". | 698 | |
27 | 27 | Running the tests | Once you have done this, you can run the Datasette unit tests from inside your datasette/ directory using pytest like so: pytest You can run the tests faster using multiple CPU cores with pytest-xdist like this: pytest -n auto -m "not serial" -n auto detects the number of available cores automatically. The -m "not serial" skips tests that don't work well in a parallel test environment. You can run those tests separately like so: pytest -m "serial" | 698 | |
28 | 28 | Using fixtures | To run Datasette itself, type datasette . You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. You can create a copy of that database by running this command: python tests/fixtures.py fixtures.db Now you can run Datasette against the new fixtures database like so: datasette fixtures.db This will start a server at http://127.0.0.1:8001/ . Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript). If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: datasette --reload fixtures.db You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that: python tests/fixtures.py fixtures.db fixtures-metadata.json Or to output the plugins used by the tests, run this: python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins Test tables written to fixtures.db - metadata written to fixtures-metadata.json Wrote plugin: fixtures-plugins/register_output_renderer.py Wrote plugin: fixtures-plugins/view_name.py Wrote plugin: fixtures-plugins/my_plugin.py Wrote plugin: fixtures-plugins/messages_output_renderer.py Wrote plugin: fixtures-plugins/my_plugin_2.py Then run Datasette like this: datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/ | 698 | |
29 | 29 | Debugging | Any errors that occur while Datasette is running will display a stack trace on the console. You can tell Datasette to open an interactive pdb (or ipdb , if present) debugger session if an error occurs using the --pdb option: datasette --pdb fixtures.db For ipdb , first run this: datasette install ipdb | 698 | |
30 | 30 | Code formatting | Datasette uses opinionated code formatters: Black for Python and Prettier for JavaScript. These formatters are enforced by Datasette's continuous integration: if a commit includes Python or JavaScript code that does not match the style enforced by those tools, the tests will fail. When developing locally, you can verify and correct the formatting of your code using these tools. | 698 | |
31 | 31 | Running Black | Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: black . --check All done! ✨ 🍰 ✨ 95 files would be left unchanged. If any of your code does not conform to Black you can run this to automatically fix those problems: black . reformatted ../datasette/setup.py All done! ✨ 🍰 ✨ 1 file reformatted, 94 files left unchanged. | 698 | |
32 | 32 | blacken-docs | The blacken-docs command applies Black formatting rules to code examples in the documentation. Run it like this: blacken-docs -l 60 docs/*.rst | 698 | |
33 | 33 | Prettier | To install Prettier, install Node.js and then run the following in the root of your datasette repository checkout: npm install This will install Prettier in a node_modules directory. You can then check that your code matches the coding style like so: npm run prettier -- --check > prettier > prettier 'datasette/static/*[!.min].js' "--check" Checking formatting... [warn] datasette/static/plugins.js [warn] Code style issues found in the above file(s). Forgot to run Prettier? You can fix any problems by running: npm run fix | 698 | |
34 | 34 | Editing and building the documentation | Datasette's documentation lives in the docs/ directory and is deployed automatically using Read The Docs . The documentation is written using reStructuredText. You may find this article on The subset of reStructuredText worth committing to memory useful. You can build it locally by installing sphinx and sphinx_rtd_theme in your Datasette development environment and then running make html directly in the docs/ directory: # You may first need to activate your virtual environment: source venv/bin/activate # Install the dependencies needed to build the docs pip install -e .[docs] # Now build the docs cd docs/ make html This will create the HTML version of the documentation in docs/_build/html . You can open it in your browser like so: open _build/html/index.html Any time you make changes to a .rst file you can re-run make html to update the built documents, then refresh them in your browser. For added productivity, you can use sphinx-autobuild to run Sphinx in auto-build mode. This will run a local webserver serving the docs that automatically rebuilds them and refreshes the page any time you hit save in your editor. sphinx-autobuild will have been installed when you ran pip install -e .[docs] . In your docs/ directory you can start the server by running the following: make livehtml Now browse to http://localhost:8000/ to view the documentation. Any edits you make should be instantly reflected in your browser. | 698 | |
35 | 35 | Running Cog | Some pages of documentation (in particular the CLI reference ) are automatically updated using Cog . To update these pages, run the following command: cog -r docs/*.rst | 698 | |
36 | 36 | Continuously deployed demo instances | The demo instance at latest.datasette.io is re-deployed automatically to Google Cloud Run for every push to main that passes the test suite. This is implemented by the GitHub Actions workflow at .github/workflows/deploy-latest.yml . Specific branches can also be set to automatically deploy by adding them to the on: push: branches block at the top of the workflow YAML file. Branches configured in this way will be deployed to a new Cloud Run service whether or not their tests pass. The Cloud Run URL for a branch demo can be found in the GitHub Actions logs. | 698 | |
37 | 37 | Release process | Datasette releases are performed using tags. When a new release is published on GitHub, a GitHub Action workflow will perform the following: Run the unit tests against all supported Python versions. If the tests pass... Build a Docker image of the release and push a tag to https://hub.docker.com/r/datasetteproject/datasette Re-point the "latest" tag on Docker Hub to the new image Build a wheel bundle of the underlying Python source code Push that new wheel up to PyPI: https://pypi.org/project/datasette/ If the release is an alpha, navigate to https://readthedocs.org/projects/datasette/versions/ and search for the tag name in the "Activate a version" filter, then mark that version as "active" to ensure it will appear on the public ReadTheDocs documentation site. To deploy new releases you will need to have push access to the main Datasette GitHub repository. Datasette follows Semantic Versioning : major.minor.patch We increment major for backwards-incompatible releases. Datasette is currently pre-1.0 so the major version is always 0 . We increment minor for new features. We increment patch for bugfix releases. Alpha and beta releases may have an additional a0 or b0 suffix - the integer component will be incremented with each subsequent alpha or beta. To release a new version, first create a commit that updates the version number in datasette/version.py and the changelog with highlights of the new version. An example commit can be seen here : # Update changelog git commit -m " Release 0.51a1 Ref… | 698 | |
38 | 38 | Alpha and beta releases | Alpha and beta releases are published to preview upcoming features that may not yet be stable - in particular to preview new plugin hooks. You are welcome to try these out, but please be aware that details may change before the final release. Please join discussions on the issue tracker to share your thoughts and experiences on alpha and beta features that you try out. | 698 | |
39 | 39 | Releasing bug fixes from a branch | If it's necessary to publish a bug fix release without shipping new features that have landed on main, a release branch can be used. Create it from the relevant last tagged release like so: git branch 0.52.x 0.52.4 git checkout 0.52.x Next cherry-pick the commits containing the bug fixes: git cherry-pick COMMIT Write the release notes in the branch, and update the version number in version.py . Then push the branch: git push -u origin 0.52.x Once the tests have completed, publish the release from that branch target using the GitHub Draft a new release form. Finally, cherry-pick the commit with the release notes and version number bump across to main : git checkout main git cherry-pick COMMIT git push | 698 | |
40 | 40 | Upgrading CodeMirror | Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . Here are the steps for upgrading to a new version of CodeMirror: Install the packages with: npm i codemirror @codemirror/lang-sql Build the bundle using the version number from package.json with: node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js \ -f iife \ -n cm \ -o datasette/static/cm-editor-6.0.1.bundle.js \ -p @rollup/plugin-node-resolve \ -p @rollup/plugin-terser Update the version reference in the codemirror.html template. | 698 | |
41 | 41 | Configuration | Datasette offers several ways to configure your Datasette instances: server settings, plugin configuration, authentication, and more. Most configuration can be handled using a datasette.yaml configuration file, passed to datasette using the -c/--config flag: datasette mydatabase.db --config datasette.yaml This file can also use JSON, as datasette.json . YAML is recommended over JSON due to its support for comments and multi-line strings. | 698 | |
42 | 42 | Configuration via the command-line | The recommended way to configure Datasette is using a datasette.yaml file passed to -c/--config . You can also pass individual settings to Datasette using the -s/--setting option, which can be used multiple times: datasette mydatabase.db \ --setting settings.default_page_size 50 \ --setting settings.sql_time_limit_ms 3500 This option takes dotted-notation for the first argument and a value for the second argument. This means you can use it to set any configuration value that would be valid in a datasette.yaml file. It also works for plugin configuration, for example for datasette-cluster-map : datasette mydatabase.db \ --setting plugins.datasette-cluster-map.latitude_column xlat \ --setting plugins.datasette-cluster-map.longitude_column xlon If the value you provide is a valid JSON object or list it will be treated as nested data, allowing you to configure plugins that accept lists such as datasette-proxy-url : datasette mydatabase.db \ -s plugins.datasette-proxy-url.paths '[{"path": "/proxy", "backend": "http://example.com/"}]' This is equivalent to a datasette.yaml file containing the following: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ plugins: datasette-proxy-url: paths: - path: /proxy backend: http://example.com/ """).strip() ) ]]] [[[end]]] | 698 | |
43 | 43 | | The following example shows some of the valid configuration options that can exist inside datasette.yaml . [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # Datasette settings block settings: default_page_size: 50 sql_time_limit_ms: 3500 max_returned_rows: 2000 # top-level plugin configuration plugins: datasette-my-plugin: key: valueA # Database and table-level configuration databases: your_db_name: # plugin configuration for the your_db_name database plugins: datasette-my-plugin: key: valueA tables: your_table_name: allow: # Only the root user can access this table id: root # plugin configuration for the your_table_name table # inside your_db_name database plugins: datasette-my-plugin: key: valueB """) ) ]]] [[[end]]] | 698 | |
44 | 44 | Settings | Settings can be configured in datasette.yaml with the settings key: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # inside datasette.yaml settings: default_allow_sql: off default_page_size: 50 """).strip() ) ]]] [[[end]]] The full list of settings is available in the settings documentation . Settings can also be passed to Datasette using one or more --setting name value command line options. | 698 | |
45 | 45 | Plugin configuration | Datasette plugins often require configuration. This plugin configuration should be placed in plugins keys inside datasette.yaml . Most plugins are configured at the top-level of the file, using the plugins key: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # inside datasette.yaml plugins: datasette-my-plugin: key: my_value """).strip() ) ]]] [[[end]]] Some plugins can be configured at the database or table level. These should use a plugins key nested under the appropriate place within the databases object: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # inside datasette.yaml databases: my_database: # plugin configuration for the my_database database plugins: datasette-my-plugin: key: my_value my_other_database: tables: my_table: # plugin configuration for the my_table table inside the my_other_database database plugins: datasette-my-plugin: key: my_value """).strip() ) ]]] [[[end]]] | 698 | |
46 | 46 | Permissions configuration | Datasette's authentication and permissions system can also be configured using datasette.yaml . Here is a simple example: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # Instance is only available to users 'sharon' and 'percy': allow: id: - sharon - percy # Only 'percy' is allowed access to the accounting database: databases: accounting: allow: id: percy """).strip() ) ]]] [[[end]]] Access permissions in datasette.yaml has the full details. | 698 | |
47 | 47 | Canned queries configuration | Canned queries are named SQL queries that appear in the Datasette interface. They can be configured in datasette.yaml using the queries key at the database level: [[[cog from metadata_doc import config_example, config_example config_example(cog, { "databases": { "sf-trees": { "queries": { "just_species": { "sql": "select qSpecies from Street_Tree_List" } } } } }) ]]] [[[end]]] See the canned queries documentation for more, including how to configure writable canned queries . | 698 | |
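For reference, the dictionary passed to config_example above corresponds to YAML like this:

```yaml
databases:
  sf-trees:
    queries:
      just_species:
        sql: select qSpecies from Street_Tree_List
```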
48 | 48 | Custom CSS and JavaScript | Datasette can load additional CSS and JavaScript files, configured in datasette.yaml like this: [[[cog from metadata_doc import config_example config_example(cog, """ extra_css_urls: - https://simonwillison.net/static/css/all.bf8cd891642c.css extra_js_urls: - https://code.jquery.com/jquery-3.2.1.slim.min.js """) ]]] [[[end]]] The extra CSS and JavaScript files will be linked in the <head> of every page: <link rel="stylesheet" href="https://simonwillison.net/static/css/all.bf8cd891642c.css"> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js"></script> You can also specify a SRI (subresource integrity hash) for these assets: [[[cog config_example(cog, """ extra_css_urls: - url: https://simonwillison.net/static/css/all.bf8cd891642c.css sri: sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI extra_js_urls: - url: https://code.jquery.com/jquery-3.2.1.slim.min.js sri: sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g= """) ]]] [[[end]]] This will produce: <link rel="stylesheet" href="https://simonwillison.net/static/css/all.bf8cd891642c.css" integrity="sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI" crossorigin="anonymous"> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=" crossorigin="anonymous"></script> Modern browsers will only execute the stylesheet or JavaScript if the SRI hash matches the content served. You can generate hashes using www.srihash.org Items in "extra_js_urls" can specify "module": true if they reference JavaScript that uses JavaScript modules . This configuration: [[[cog config_example(cog, """ extra_js_urls: - url: http… | 698 | |
49 | 49 | Authentication and permissions | Datasette doesn't require authentication by default. Any visitor to a Datasette instance can explore the full data and execute read-only SQL queries. Datasette's plugin system can be used to add many different styles of authentication, such as user accounts, single sign-on or API keys. | 698 | |
50 | 50 | Actors | Through plugins, Datasette can support both authenticated users (with cookies) and authenticated API agents (via authentication tokens). The word "actor" is used to cover both of these cases. Every request to Datasette has an associated actor value, available in the code as request.actor . This can be None for unauthenticated requests, or a JSON compatible Python dictionary for authenticated users or API agents. The actor dictionary can be any shape - the design of that data structure is left up to the plugins. A useful convention is to include an "id" string, as demonstrated by the "root" actor below. Plugins can use the actor_from_request(datasette, request) hook to implement custom logic for authenticating an actor based on the incoming HTTP request. | 698 | |
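A minimal sketch of the actor_from_request(datasette, request) hook, assuming a hypothetical api_keys table that maps secret tokens to user IDs:

```python
from datasette import hookimpl


@hookimpl
def actor_from_request(datasette, request):
    # Hypothetical scheme: treat "Authorization: Bearer <secret>"
    # as a lookup key into an api_keys table
    auth = request.headers.get("authorization") or ""
    if not auth.startswith("Bearer "):
        return None  # No opinion - other plugins may authenticate instead

    async def inner():
        secret = auth[len("Bearer ") :]
        db = datasette.get_database()
        row = (
            await db.execute(
                "select user_id from api_keys where secret = :secret",
                {"secret": secret},
            )
        ).first()
        return {"id": row["user_id"]} if row else None

    # Returning an async callable lets the hook await database queries
    return inner
```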
51 | 51 | Using the "root" actor | Datasette currently leaves almost all forms of authentication to plugins - datasette-auth-github for example. The one exception is the "root" account, which you can sign into while using Datasette on your local machine. This provides access to a small number of debugging features. To sign in as root, start Datasette using the --root command-line option, like this: datasette --root http://127.0.0.1:8001/-/auth-token?token=786fc524e0199d70dc9a581d851f466244e114ca92f33aa3b42a139e9388daa7 INFO: Started server process [25801] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) The URL on the first line includes a one-use token which can be used to sign in as the "root" actor in your browser. Click on that link and then visit http://127.0.0.1:8001/-/actor to confirm that you are authenticated as an actor that looks like this: { "id": "root" } | 698 | |
52 | 52 | Permissions | Datasette has an extensive permissions system built-in, which can be further extended and customized by plugins. The key question the permissions system answers is this: Is this actor allowed to perform this action , optionally against this particular resource ? Actors are described above . An action is a string describing the action the actor would like to perform. A full list is provided below - examples include view-table and execute-sql . A resource is the item the actor wishes to interact with - for example a specific database or table. Some actions, such as permissions-debug , are not associated with a particular resource. Datasette's built-in view permissions ( view-database , view-table etc) default to allow - unless you configure additional permission rules unauthenticated users will be allowed to access content. Permissions with potentially harmful effects should default to deny . Plugin authors should account for this when designing new plugins - for example, the datasette-upload-csvs plugin defaults to deny so that installations don't accidentally allow unauthenticated users to create new tables by uploading a CSV file. | 698 | |
53 | 53 | How permissions are resolved | The datasette.permission_allowed(actor, action, resource=None, default=...) method is called to check if an actor is allowed to perform a specific action. This method asks every plugin that implements the permission_allowed(datasette, actor, action, resource) hook if the actor is allowed to perform the action. Each plugin can return True to indicate that the actor is allowed to perform the action, False if they are not allowed and None if the plugin has no opinion on the matter. False acts as a veto - if any plugin returns False then the permission check is denied. Otherwise, if any plugin returns True then the permission check is allowed. The resource argument can be used to specify a specific resource that the action is being performed against. Some permissions, such as view-instance , do not involve a resource. Others such as view-database have a resource that is a string naming the database. Permissions that take both a database name and the name of a table, view or canned query within that database use a resource that is a tuple of two strings, (database_name, resource_name) . Plugins that implement the permission_allowed() hook can decide if they are going to consider the provided resource or not. | 698 | |
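A minimal sketch of the permission_allowed hook described above, assuming a hypothetical rule that actors with "staff": true may view a database named reports:

```python
from datasette import hookimpl


@hookimpl
def permission_allowed(actor, action, resource):
    # Hypothetical rule: staff actors may view the "reports" database
    if (
        action == "view-database"
        and resource == "reports"
        and actor
        and actor.get("staff")
    ):
        return True
    # None means "no opinion" - Datasette keeps asking other plugins.
    # Returning False instead would veto the check outright.
    return None
```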
54 | 54 | Defining permissions with "allow" blocks | The standard way to define permissions in Datasette is to use an "allow" block in the datasette.yaml file . This is a JSON document describing which actors are allowed to perform a permission. The most basic form of allow block is this ( allow demo , deny demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: id: root """).strip(), "YAML", "JSON" ) ]]] [[[end]]] This will match any actors with an "id" property of "root" - for example, an actor that looks like this: { "id": "root", "name": "Root User" } An allow block can specify "deny all" using false ( demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: false """).strip(), "YAML", "JSON" ) ]]] [[[end]]] An "allow" of true allows all access ( demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: true """).strip(), "YAML", "JSON" ) ]]] [[[end]]] Allow keys can provide a list of values. These will match any actor that has any of those values ( allow demo , deny demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: id: - simon - cleopaws """).strip(), "YAML", "JSON" ) ]]] [[[end]]] This will match any actor with an "id" of either "simon" or "cleopaws" . Actors can have properties that feature a list of values. These will be matched against the list of values in an allow block. Consider the following actor: { "id": "simon"… | 698 | |
55 | 55 | The /-/allow-debug tool | The /-/allow-debug tool lets you try out different "action" blocks against different "actor" JSON objects. You can try that out here: https://latest.datasette.io/-/allow-debug | 698 | |
56 | 56 | Access permissions in datasette.yaml | There are two ways to configure permissions using datasette.yaml (or datasette.json ). For simple visibility permissions you can use "allow" blocks in the root, database, table and query sections. For other permissions you can use a "permissions" block, described in the next section . You can limit who is allowed to view different parts of your Datasette instance using "allow" keys in your Configuration . You can control the following: Access to the entire Datasette instance Access to specific databases Access to specific tables and views Access to specific Canned queries If a user cannot access a specific database, they will not be able to access tables, views or queries within that database. If a user cannot access the instance they will not be able to access any of the databases, tables, views or queries. | 698 | |
57 | 57 | Access to an instance | Here's how to restrict access to your entire Datasette instance to just the "id": "root" user: [[[cog from metadata_doc import config_example config_example(cog, """ title: My private Datasette instance allow: id: root """) ]]] [[[end]]] To deny access to all users, you can use "allow": false : [[[cog config_example(cog, """ title: My entirely inaccessible instance allow: false """) ]]] [[[end]]] One reason to do this is if you are using a Datasette plugin - such as datasette-permissions-sql - to control permissions instead. | 698 | |
58 | 58 | Access to specific databases | To limit access to a specific private.db database to just authenticated users, use the "allow" block like this: [[[cog config_example(cog, """ databases: private: allow: id: "*" """) ]]] [[[end]]] | 698 | |
59 | 59 | Access to specific tables and views | To limit access to the users table in your bakery.db database: [[[cog config_example(cog, """ databases: bakery: tables: users: allow: id: '*' """) ]]] [[[end]]] This works for SQL views as well - you can list their names in the "tables" block above in the same way as regular tables. Restricting access to tables and views in this way will NOT prevent users from querying them using arbitrary SQL queries, like this for example. If you are restricting access to specific tables you should also use the "allow_sql" block to prevent users from bypassing the limit with their own SQL queries - see Controlling the ability to execute arbitrary SQL . | 698 | |
60 | 60 | Access to specific canned queries | Canned queries allow you to configure named SQL queries in your datasette.yaml that can be executed by users. These queries can be set up to both read and write to the database, so controlling who can execute them can be important. To limit access to the add_name canned query in your dogs.db database to just the root user : [[[cog config_example(cog, """ databases: dogs: queries: add_name: sql: INSERT INTO names (name) VALUES (:name) write: true allow: id: - root """) ]]] [[[end]]] | 698 | |
61 | 61 | Controlling the ability to execute arbitrary SQL | Datasette defaults to allowing any site visitor to execute their own custom SQL queries, for example using the form on the database page or by appending a ?_where= parameter to the table page like this . Access to this ability is controlled by the execute-sql permission. The easiest way to disable arbitrary SQL queries is using the default_allow_sql setting when you first start Datasette running. You can alternatively use an "allow_sql" block to control who is allowed to execute arbitrary SQL queries. To prevent any user from executing arbitrary SQL queries, use this: [[[cog config_example(cog, """ allow_sql: false """) ]]] [[[end]]] To enable just the root user to execute SQL for all databases in your instance, use the following: [[[cog config_example(cog, """ allow_sql: id: root """) ]]] [[[end]]] To limit this ability for just one specific database, use this: [[[cog config_example(cog, """ databases: mydatabase: allow_sql: id: root """) ]]] [[[end]]] | 698 | |
62 | 62 | Other permissions in datasette.yaml | For all other permissions, you can use one or more "permissions" blocks in your datasette.yaml configuration file. To grant access to the permissions debug tool to all signed in users, you can grant permissions-debug to any actor with an id matching the wildcard * by adding this at the root of your configuration: [[[cog config_example(cog, """ permissions: permissions-debug: id: '*' """) ]]] [[[end]]] To grant create-table to the user with id of editor for the docs database: [[[cog config_example(cog, """ databases: docs: permissions: create-table: id: editor """) ]]] [[[end]]] And for insert-row against the reports table in that docs database: [[[cog config_example(cog, """ databases: docs: tables: reports: permissions: insert-row: id: editor """) ]]] [[[end]]] The permissions debug tool can be useful for helping test permissions that you have configured in this way. | 698 | |
63 | 63 | API Tokens | Datasette includes a default mechanism for generating API tokens that can be used to authenticate requests. Authenticated users can create new API tokens using a form on the /-/create-token page. Tokens created in this way can be further restricted to only allow access to specific actions, or to limit those actions to specific databases, tables or queries. Created tokens can then be passed in the Authorization: Bearer $token header of HTTP requests to Datasette. A token created by a user will include that user's "id" in the token payload, so any permissions granted to that user based on their ID can be made available to the token as well. When one of these tokens accompanies a request, the actor for that request will have the following shape: { "id": "user_id", "token": "dstok", "token_expires": 1667717426 } The "id" field duplicates the ID of the actor who first created the token. The "token" field identifies that this actor was authenticated using a Datasette signed token ( dstok ). The "token_expires" field, if present, indicates that the token will expire after that integer timestamp. The /-/create-token page cannot be accessed by actors that are authenticated with a "token": "some-value" property. This is to prevent API tokens from being used to create more tokens. Datasette plugins that implement their own form of API token authentication should follow this convention. You can disable the signed token feature entirely using the allow_signed_tokens setting. | 698 | |
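A sketch of calling the API with one of these tokens, using only the Python standard library; the token value and instance URL here are placeholders:

```python
import json
from urllib.request import Request, urlopen

# Placeholder token - create a real one at /-/create-token
token = "dstok_xxx"

request = Request(
    "http://127.0.0.1:8001/-/actor.json",
    headers={"Authorization": "Bearer {}".format(token)},
)
with urlopen(request) as response:
    # Should show the actor shape described above
    print(json.load(response))
```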
64 | 64 | datasette create-token | You can also create tokens on the command line using the datasette create-token command. This command takes one required argument - the ID of the actor to be associated with the created token. You can specify a -e/--expires-after option in seconds. If omitted, the token will never expire. The command will sign the token using the DATASETTE_SECRET environment variable, if available. You can also pass the secret using the --secret option. This means you can run the command locally to create tokens for use with a deployed Datasette instance, provided you know that instance's secret. To create a token for the root actor that will expire in one hour: datasette create-token root --expires-after 3600 To create a token that never expires using a specific secret: datasette create-token root --secret my-secret-goes-here | 698 | |
65 | 65 | Restricting the actions that a token can perform | Tokens created using datasette create-token ACTOR_ID will inherit all of the permissions of the actor that they are associated with. You can pass additional options to create tokens that are restricted to a subset of that actor's permissions. To restrict the token to just specific permissions against all available databases, use the --all option: datasette create-token root --all insert-row --all update-row This option can be passed as many times as you like. In the above example the token will only be allowed to insert and update rows. You can also restrict permissions such that they can only be used within specific databases: datasette create-token root --database mydatabase insert-row The resulting token will only be able to insert rows, and only to tables in the mydatabase database. Finally, you can restrict permissions to individual resources - tables, SQL views and named queries - within a specific database: datasette create-token root --resource mydatabase mytable insert-row These options have short versions: -a for --all , -d for --database and -r for --resource . You can add --debug to see a JSON representation of the token that has been created. Here's a full example: datasette create-token root \ --secret mysecret \ --all view-instance \ --all view-table \ --database docs view-query \ --resource docs documents insert-row \ --resource docs documents update-row \ --debug This example outputs the following: dstok_.eJxFizEKgDAMRe_y5w4qYrFXERGxDkVsMI0uxbubdjFL8l_ez1jhwEQCA6Fjjxp90qtkuHawzdjYrh8MFobLxZ_wBH0_gtnAF-hpS5VfmF8D_lnd97lHqUJgLd6sls4H1qwlhA.nH_7RecYHj5qSzvjhMU95iy0Xlc Decoded: { "a": "root", "token": "dstok", "t": 1670907246, "_r": { "a… | 698 | |
66 | 66 | Checking permissions in plugins | Datasette plugins can check if an actor has permission to perform an action using the datasette.permission_allowed(...) method. Datasette core performs a number of permission checks, documented below . Plugins can implement the permission_allowed(datasette, actor, action, resource) plugin hook to participate in decisions about whether an actor should be able to perform a specified action. | 698 | |
67 | 67 | actor_matches_allow() | Plugins that wish to implement this same "allow" block permissions scheme can take advantage of the datasette.utils.actor_matches_allow(actor, allow) function: from datasette.utils import actor_matches_allow actor_matches_allow({"id": "root"}, {"id": "*"}) # returns True The currently authenticated actor is made available to plugins as request.actor . | 698 | |
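The same function handles the list-valued matching rules described in the allow block documentation above - a sketch:

```python
from datasette.utils import actor_matches_allow

# An allow block value that is a list matches any of those values
actor_matches_allow({"id": "simon"}, {"id": ["simon", "cleopaws"]})
# True

# Actor properties that are lists match if any value overlaps
actor_matches_allow({"roles": ["staff", "dev"]}, {"roles": ["staff"]})
# True

actor_matches_allow({"id": "trevor"}, {"id": ["simon", "cleopaws"]})
# False
```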
68 | 68 | The permissions debug tool | The debug tool at /-/permissions is only available to the authenticated root user (or any actor granted the permissions-debug action). It shows the thirty most recent permission checks that have been carried out by the Datasette instance. It also provides an interface for running hypothetical permission checks against a hypothetical actor. This is a useful way of confirming that your configured permissions work in the way you expect. This is designed to help administrators and plugin authors understand exactly how permission checks are being carried out, in order to effectively configure Datasette's permission system. | 698 | |
69 | 69 | The ds_actor cookie | Datasette includes a default authentication plugin which looks for a signed ds_actor cookie containing a JSON actor dictionary. This is how the root actor mechanism works. Authentication plugins can set signed ds_actor cookies themselves like so: response = Response.redirect("/") datasette.set_actor_cookie(response, {"id": "cleopaws"}) The shape of data encoded in the cookie is as follows: { "a": { "id": "cleopaws" } } To implement logout in a plugin, use the delete_actor_cookie() method: response = Response.redirect("/") datasette.delete_actor_cookie(response) | 698 | |
70 | 70 | Including an expiry time | ds_actor cookies can optionally include a signed expiry timestamp, after which the cookies will no longer be valid. Authentication plugins may choose to use this mechanism to limit the lifetime of the cookie. For example, if a plugin implements single-sign-on against another source it may decide to set short-lived cookies so that if the user is removed from the SSO system their existing Datasette cookies will stop working shortly afterwards. To include an expiry pass expire_after= to datasette.set_actor_cookie() with a number of seconds. For example, to expire in 24 hours: response = Response.redirect("/") datasette.set_actor_cookie( response, {"id": "cleopaws"}, expire_after=60 * 60 * 24 ) The resulting cookie will encode data that looks something like this: { "a": { "id": "cleopaws" }, "e": "1jjSji" } | 698 | |
71 | 71 | The /-/logout page | The page at /-/logout provides the ability to log out of a ds_actor cookie authentication session. | 698 | |
72 | 72 | Built-in permissions | This section lists all of the permission checks that are carried out by Datasette core, along with the resource if it was passed. | 698 | |
73 | 73 | view-instance | Top level permission - Actor is allowed to view any pages within this instance, starting at https://latest.datasette.io/ Default allow . | 698 | |
74 | 74 | view-database | Actor is allowed to view a database page, e.g. https://latest.datasette.io/fixtures resource - string The name of the database Default allow . | 698 | |
75 | 75 | view-database-download | Actor is allowed to download a database, e.g. https://latest.datasette.io/fixtures.db resource - string The name of the database Default allow . | 698 | |
76 | 76 | view-table | Actor is allowed to view a table (or view) page, e.g. https://latest.datasette.io/fixtures/complex_foreign_keys resource - tuple: (string, string) The name of the database, then the name of the table Default allow . | 698 | |
77 | 77 | view-query | Actor is allowed to view (and execute) a canned query page, e.g. https://latest.datasette.io/fixtures/pragma_cache_size - this includes executing Writable canned queries . resource - tuple: (string, string) The name of the database, then the name of the canned query Default allow . | 698 | |
78 | 78 | insert-row | Actor is allowed to insert rows into a table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny . | 698 | |
79 | 79 | delete-row | Actor is allowed to delete rows from a table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny . | 698 | |
80 | 80 | update-row | Actor is allowed to update rows in a table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny . | 698 | |
81 | 81 | create-table | Actor is allowed to create a database table. resource - string The name of the database Default deny . | 698 | |
82 | 82 | alter-table | Actor is allowed to alter a database table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny . | 698 | |
83 | 83 | drop-table | Actor is allowed to drop a database table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny . | 698 | |
84 | 84 | execute-sql | Actor is allowed to run arbitrary SQL queries against a specific database, e.g. https://latest.datasette.io/fixtures?sql=select+100 resource - string The name of the database Default allow . See also the default_allow_sql setting . | 698 | |
85 | 85 | permissions-debug | Actor is allowed to view the /-/permissions debug page. Default deny . | 698 | |
86 | 86 | debug-menu | Controls if the various debug pages are displayed in the navigation menu. Default deny . | 698 | |
87 | 87 | Writing plugins | You can write one-off plugins that apply to just one Datasette instance, or you can write plugins which can be installed using pip and can be shipped to the Python Package Index ( PyPI ) for other people to install. Want to start by looking at an example? The Datasette plugins directory lists more than 90 open source plugins with code you can explore. The plugin hooks page includes links to example plugins for each of the documented hooks. | 698 | |
88 | 88 | Tracing plugin hooks | The DATASETTE_TRACE_PLUGINS environment variable turns on detailed tracing showing exactly which hooks are being run. This can be useful for understanding how Datasette is using your plugin. DATASETTE_TRACE_PLUGINS=1 datasette mydb.db Example output: actor_from_request: { 'datasette': <datasette.app.Datasette object at 0x100bc7220>, 'request': <asgi.Request method="GET" url="http://127.0.0.1:4433/">} Hook implementations: [ <HookImpl plugin_name='codespaces', plugin=<module 'datasette_codespaces' from '.../site-packages/datasette_codespaces/__init__.py'>>, <HookImpl plugin_name='datasette.actor_auth_cookie', plugin=<module 'datasette.actor_auth_cookie' from '.../datasette/datasette/actor_auth_cookie.py'>>, <HookImpl plugin_name='datasette.default_permissions', plugin=<module 'datasette.default_permissions' from '.../datasette/default_permissions.py'>>] Results: [{'id': 'root'}] | 698 | |
89 | 89 | Writing one-off plugins | The quickest way to start writing a plugin is to create a my_plugin.py file and drop it into your plugins/ directory. Here is an example plugin, which adds a new custom SQL function called hello_world() which takes no arguments and returns the string Hello world! . from datasette import hookimpl @hookimpl def prepare_connection(conn): conn.create_function( "hello_world", 0, lambda: "Hello world!" ) If you save this in plugins/my_plugin.py you can then start Datasette like this: datasette serve mydb.db --plugins-dir=plugins/ Now you can navigate to http://localhost:8001/mydb and run this SQL: select hello_world(); to see the output of your plugin. | 698 | |
90 | 90 | Starting an installable plugin using cookiecutter | Plugins that can be installed should be written as Python packages using a setup.py file. The quickest way to start writing an installable plugin is to use the datasette-plugin cookiecutter template. This creates a new plugin structure for you complete with an example test and GitHub Actions workflows for testing and publishing your plugin. Install cookiecutter and then run this command to start building a plugin using the template: cookiecutter gh:simonw/datasette-plugin Read a cookiecutter template for writing Datasette plugins for more information about this template. | 698 | |
91 | 91 | Packaging a plugin | Plugins can be packaged using Python setuptools. You can see an example of a packaged plugin at https://github.com/simonw/datasette-plugin-demos The example consists of two files: a setup.py file that defines the plugin: from setuptools import setup VERSION = "0.1" setup( name="datasette-plugin-demos", description="Examples of plugins for Datasette", author="Simon Willison", url="https://github.com/simonw/datasette-plugin-demos", license="Apache License, Version 2.0", version=VERSION, py_modules=["datasette_plugin_demos"], entry_points={ "datasette": [ "plugin_demos = datasette_plugin_demos" ] }, install_requires=["datasette"], ) And a Python module file, datasette_plugin_demos.py , that implements the plugin: from datasette import hookimpl import random @hookimpl def prepare_jinja2_environment(env): env.filters["uppercase"] = lambda u: u.upper() @hookimpl def prepare_connection(conn): conn.create_function( "random_integer", 2, random.randint ) Having built a plugin in this way you can turn it into an installable package using the following command: python3 setup.py sdist This will create a .tar.gz file in the dist/ directory. You can then install your new plugin into a Datasette virtual environment or Docker container using pip : pip install datasette-plugin-demos-0.1.tar.gz To learn how to upload your plugin to PyPI for use by other people, read the PyPA guide to Packaging and distributing projects . | 698 | |
92 | 92 | Static assets | If your plugin has a static/ directory, Datasette will automatically configure itself to serve those static assets from the following path: /-/static-plugins/NAME_OF_PLUGIN_PACKAGE/yourfile.js Use the datasette.urls.static_plugins(plugin_name, path) method to generate URLs to that asset that take the base_url setting into account, see datasette.urls . To bundle the static assets for a plugin in the package that you publish to PyPI, add the following to the plugin's setup.py : package_data = ( { "datasette_plugin_name": [ "static/plugin.js", ], }, ) Where datasette_plugin_name is the name of the plugin package (note that it uses underscores, not hyphens) and static/plugin.js is the path within that package to the static file. datasette-cluster-map is a useful example of a plugin that includes packaged static assets in this way. See Writing custom CSS for tips on writing CSS that is compatible with Datasette's default CSS, including details of the core class for applying Datasette's default form element styles. | 698 | |
93 | 93 | Custom templates | If your plugin has a templates/ directory, Datasette will attempt to load templates from that directory before it uses its own default templates. The priority order for template loading is: templates from the --template-dir argument, if specified templates from the templates/ directory in any installed plugins default templates that ship with Datasette See Custom pages and templates for more details on how to write custom templates, including which filenames to use to customize which parts of the Datasette UI. Templates should be bundled for distribution using the same package_data mechanism in setup.py described for static assets above, for example: package_data = ( { "datasette_plugin_name": [ "templates/my_template.html", ], }, ) You can also use wildcards here such as templates/*.html . See datasette-edit-schema for an example of this pattern. | 698 | |
94 | 94 | Writing plugins that accept configuration | When you are writing plugins, you can access their configuration using the datasette.plugin_config() method. If you know you need plugin configuration for a specific table, you can access it like this: plugin_config = datasette.plugin_config( "datasette-cluster-map", database="sf-trees", table="Street_Tree_List" ) In the above example this will return {"latitude_column": "lat", "longitude_column": "lng"} . If there is no configuration for that plugin, the method will return None . If it cannot find the requested configuration at the table layer, it will fall back to the database layer and then the root layer. For example, a user may have set the plugin configuration option inside datasette.yaml like so: [[[cog from metadata_doc import metadata_example metadata_example(cog, { "databases": { "sf-trees": { "plugins": { "datasette-cluster-map": { "latitude_column": "xlat", "longitude_column": "xlng" } } } } }) ]]] [[[end]]] In this case, the above code would return that configuration for ANY table within the sf-trees database. The plugin configuration could also be set at the top level of datasette.yaml : [[[cog metadata_example(cog, { "plugins": { "datasette-cluster-map": { "latitude_column": "xlat", "longitude_column": "xlng" } } }) ]]] [[[end]]] Now the datasette-cluster-map plugin configuration will apply to every table in every database. | 698 | |
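For illustration, here is a sketch of reading that configuration from inside a plugin hook. extra_body_script is used only as an example hook here, and the JavaScript global it sets is hypothetical:

    import json

    from datasette import hookimpl

    @hookimpl
    def extra_body_script(database, table, datasette):
        # plugin_config() checks the table layer, then the database layer,
        # then the root "plugins" block; it returns None if nothing is set
        config = (
            datasette.plugin_config(
                "datasette-cluster-map", database=database, table=table
            )
            or {}
        )
        return "window.CLUSTER_MAP_CONFIG = {};".format(json.dumps(config))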
95 | 95 | Designing URLs for your plugin | You can register new URL routes within Datasette using the register_routes(datasette) plugin hook. Datasette's default URLs include these: /dbname - database page /dbname/tablename - table page /dbname/tablename/pk - row page See Pages and API endpoints and Introspection for more default URL routes. To avoid accidentally conflicting with a database file that may be loaded into Datasette, plugins should register URLs using a /-/ prefix. For example, if your plugin adds a new interface for uploading Excel files you might register a URL route like this one: /-/upload-excel Try to avoid registering URLs that clash with other plugins that your users might have installed. There is no central repository of reserved URL paths (yet) but you can review existing plugins by browsing the plugins directory . If your plugin includes functionality that relates to a specific database you could also register a URL route like this: /dbname/-/upload-excel Or for a specific table like this: /dbname/tablename/-/modify-table-schema Note that a row could have a primary key of - and this URL scheme will still work, because Datasette row pages do not ever have a trailing slash followed by additional path components. | 698 | |
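A minimal sketch of registering the /-/upload-excel URL described above using the register_routes() hook; the view function body is a placeholder, not a real upload implementation:

    from datasette import hookimpl, Response

    @hookimpl
    def register_routes():
        # Each route is a (regex, view function) pair; the /-/ prefix keeps
        # the URL from colliding with database and table names
        return [
            (r"^/-/upload-excel$", upload_excel),
        ]

    async def upload_excel(request):
        return Response.html("<h1>Upload an Excel file</h1>")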
96 | 96 | Building URLs within plugins | Plugins that define their own custom user interface elements may need to link to other pages within Datasette. This can be a bit tricky if the Datasette instance is using the base_url configuration setting to run behind a proxy, since that can cause Datasette's URLs to include an additional prefix. The datasette.urls object provides internal methods for correctly generating URLs to different pages within Datasette, taking any base_url configuration into account. This object is exposed in templates as the urls variable, which can be used like this: Back to the <a href="{{ urls.instance() }}">Homepage</a> See datasette.urls for full details on this object. | 698 | |
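A few of the datasette.urls methods as they might be called from plugin code rather than from a template; fixtures and sections are placeholder database and table names:

    # Inside any plugin hook that receives the datasette object:
    datasette.urls.instance()                     # "/" (or the configured base_url)
    datasette.urls.database("fixtures")           # "/fixtures"
    datasette.urls.table("fixtures", "sections")  # "/fixtures/sections"
    datasette.urls.path("-/versions")             # any path, with base_url applied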
97 | 97 | Plugins that define new plugin hooks | Plugins can define new plugin hooks that other plugins can use to further extend their functionality. datasette-graphql is one example of a plugin that does this. It defines a new hook called graphql_extra_fields , described here , which other plugins can use to define additional fields that should be included in the GraphQL schema. To define additional hooks, add a file to the plugin called datasette_your_plugin/hookspecs.py with content that looks like this: from pluggy import HookspecMarker hookspec = HookspecMarker("datasette") @hookspec def name_of_your_hook_goes_here(datasette): "Description of your hook." You should define your own hook name and arguments here, following the documentation for Pluggy specifications . Make sure to pick a name that is unlikely to clash with hooks provided by any other plugins. Then, to register your plugin hooks, add the following code to your datasette_your_plugin/__init__.py file: from datasette.plugins import pm from . import hookspecs pm.add_hookspecs(hookspecs) This will register your plugin hooks as part of the datasette plugin hook namespace. Within your plugin code you can trigger the hook using this pattern: from datasette.plugins import pm for ( plugin_return_value ) in pm.hook.name_of_your_hook_goes_here( datasette=datasette ): # Do something with plugin_return_value pass Other plugins will then be able to register their own implementations of your hook using this syntax: from datasette import hookimpl @hookimpl def name_of_your_hook_goes_here(datasette): return "Response from this plugin hook" These plugin implementations can accept 0 or more of the named arguments that you defined in your hook specification. | 698 | |
98 | 98 | CLI reference | The datasette CLI tool provides a number of commands. Running datasette without specifying a command runs the default command, datasette serve . See datasette serve for the full list of options for that command. [[[cog from datasette import cli from click.testing import CliRunner import textwrap def help(args): title = "datasette " + " ".join(args) cog.out("\n::\n\n") result = CliRunner().invoke(cli.cli, args) output = result.output.replace("Usage: cli ", "Usage: datasette ") cog.out(textwrap.indent(output, ' ')) cog.out("\n\n") ]]] [[[end]]] | 698 | |
99 | 99 | datasette --help | Running datasette --help shows a list of all of the available commands. [[[cog help(["--help"]) ]]] Usage: datasette [OPTIONS] COMMAND [ARGS]... Datasette is an open source multi-tool for exploring and publishing data About Datasette: https://datasette.io/ Full documentation: https://docs.datasette.io/ Options: --version Show the version and exit. --help Show this message and exit. Commands: serve* Serve up specified SQLite database files with a web UI create-token Create a signed API token for the specified actor ID inspect Generate JSON summary of provided database files install Install plugins and packages from PyPI into the same... package Package SQLite files into a Datasette Docker container plugins List currently installed plugins publish Publish specified SQLite database files to the internet... uninstall Uninstall plugins and Python packages from the Datasette... [[[end]]] Additional commands added by plugins that use the register_commands(cli) hook will be listed here as well. | 698 | |
100 | 100 | datasette serve | This command starts the Datasette web application running on your machine: datasette serve mydatabase.db Or since this is the default command you can run this instead: datasette mydatabase.db Once started you can access it at http://localhost:8001 [[[cog help(["serve", "--help"]) ]]] Usage: datasette serve [OPTIONS] [FILES]... Serve up specified SQLite database files with a web UI Options: -i, --immutable PATH Database files to open in immutable mode -h, --host TEXT Host for server. Defaults to 127.0.0.1 which means only connections from the local machine will be allowed. Use 0.0.0.0 to listen to all IPs and allow access from other machines. -p, --port INTEGER RANGE Port for server, defaults to 8001. Use -p 0 to automatically assign an available port. [0<=x<=65535] --uds TEXT Bind to a Unix domain socket --reload Automatically reload if code or metadata change detected - useful for development --cors Enable CORS by serving Access-Control-Allow- Origin: * --load-extension PATH:ENTRYPOINT? Path to a SQLite extension to load, and optional entrypoint --inspect-file TEXT Path to JSON file created using "datasette inspect" -m, --metadata FILENAME Path to JSON/YAML file containing license/source metadata --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files fr… | 698 |
CREATE VIRTUAL TABLE [sections_fts] USING FTS5 ( [title], [content], tokenize='porter', content=[sections] );