sections
id | page | ref | title | content | breadcrumbs | references |
---|---|---|---|---|---|---|
contributing:contributing-formatting-black | contributing | contributing-formatting-black | Running Black | Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: black . --check All done! ✨ 🍰 ✨ 95 files would be left unchanged. If any of your code does not conform to Black you can run this to automatically fix those problems: black . reformatted ../datasette/setup.py All done! ✨ 🍰 ✨ 1 file reformatted, 94 files left unchanged. | ["Contributing", "Code formatting"] | [] |
contributing:contributing-using-fixtures | contributing | contributing-using-fixtures | Using fixtures | To run Datasette itself, type datasette . You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. You can create a copy of that database by running this command: python tests/fixtures.py fixtures.db Now you can run Datasette against the new fixtures database like so: datasette fixtures.db This will start a server at http://127.0.0.1:8001/ . Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript). If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: datasette --reload fixtures.db You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that: python tests/fixtures.py fixtures.db fixtures-metadata.json Or to output the plugins used by the tests, run this: python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins Test tables written to fixtures.db - metadata written to fixtures-metadata.json Wrote plugin: fixtures-plugins/register_output_renderer.py Wrote plugin: fixtures-plugins/view_name.py Wrote plugin: fixtures-plugins/my_plugin.py Wrote plugin: fixtures-plugins/messages_output_renderer.py Wrote plugin: fixtures-plugins/my_plugin_2.py Then run Datasette like this: datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/ | ["Contributing"] | [] |
contributing:general-guidelines | contributing | general-guidelines | General guidelines | main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them. | ["Contributing"] | [] |
contributing:id1 | contributing | id1 | Contributing | Datasette is an open source project. We welcome contributions! This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins . | [] | [] |
csv_export:csv-export-url-parameters | csv_export | csv-export-url-parameters | URL parameters | The following options can be used to customize the CSVs returned by Datasette. ?_header=off This removes the first row of the CSV file specifying the headings - only the row data will be returned. ?_stream=on Stream all matching records, not just the first page of results. See below. ?_dl=on Causes Datasette to return a content-disposition: attachment; filename="filename.csv" header. | ["CSV export"] | [] |
csv_export:streaming-all-records | csv_export | streaming-all-records | Streaming all records | The stream all rows option is designed to be as efficient as possible - under the hood it takes advantage of Python 3 asyncio capabilities and Datasette's efficient pagination to stream back the full CSV file. Since databases can get pretty large, by default this option is capped at 100MB - if a table returns more than 100MB of data the last line of the CSV will be a truncation error message. You can increase or remove this limit using the max_csv_mb config setting. You can also disable the CSV export feature entirely using allow_csv_stream . | ["CSV export"] | [] |
custom_templates:css-classes-on-the-body | custom_templates | css-classes-on-the-body | CSS classes on the <body> | Every default template includes CSS classes in the body designed to support custom styling. The index template (the top level page at / ) gets this: <body class="index"> The database template ( /dbname ) gets this: <body class="db db-dbname"> The custom SQL template ( /dbname?sql=... ) gets this: <body class="query db-dbname"> A canned query template ( /dbname/queryname ) gets this: <body class="query db-dbname query-queryname"> The table template ( /dbname/tablename ) gets: <body class="table db-dbname table-tablename"> The row template ( /dbname/tablename/rowid ) gets: <body class="row db-dbname table-tablename"> The db-x and table-x classes use the database or table names themselves if they are valid CSS identifiers. If they aren't, we strip any invalid characters out and append a 6 character md5 digest of the original name, in order to ensure that multiple tables which resolve to the same stripped character version still have different CSS classes. Some examples: "simple" => "simple" "MixedCase" => "MixedCase" "-no-leading-hyphens" => "no-leading-hyphens-65bea6" "_no-leading-underscores" => "no-leading-underscores-b921bc" "no spaces" => "no-spaces-7088d7" "-" => "336d5e" "no $ characters" => "no--characters-59e024" <td> and <th> elements also get custom CSS classes reflecting the database column they are representing, for example: <table> <thead> <tr> <th class="col-id" scope="col">id</th> <th class="col-name" scope="col">name</th> </tr> </thead> <tbody> <tr> <td class="col-id"><a href="...">1</a></td> <td class="col-name">SMITH</td> </tr> </tbody> </table> | ["Custom pages and templates"] | [] |
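To make the stripping-plus-digest scheme above concrete, here is a rough reconstruction in Python. This is an approximation written from the examples listed in the row, not Datasette's actual implementation (Datasette ships a `to_css_class()` utility in `datasette.utils`); the exact regular expressions are assumptions.

```python
# Rough reconstruction of the CSS class scheme described above - an
# approximation, not Datasette's actual to_css_class() implementation.
import hashlib
import re


def css_class(name):
    # Names that are already valid CSS identifiers pass through unchanged:
    if re.match(r"^[a-zA-Z]+[_a-zA-Z0-9-]*$", name):
        return name
    # Assumed cleanup: spaces become hyphens, other invalid characters are
    # dropped, leading hyphens/underscores are stripped:
    stripped = re.sub(r"[^a-zA-Z0-9_-]", "", name.replace(" ", "-")).lstrip("-_")
    # 6 character md5 digest of the original name, as described above:
    digest = hashlib.md5(name.encode("utf8")).hexdigest()[:6]
    return "-".join(filter(None, [stripped, digest]))


# e.g. css_class("simple") == "simple"; css_class("-") yields only a digest
```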
custom_templates:custom-pages-404 | custom_templates | custom-pages-404 | Returning 404s | To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function: {% if not rows %} {{ raise_404("Content not found") }} {% endif %} If you call raise_404() the other content in your template will be ignored. | ["Custom pages and templates"] | [] |
custom_templates:custom-pages-headers | custom_templates | custom-pages-headers | Custom headers and status codes | Custom pages default to being served with a content-type of text/html; charset=utf-8 and a 200 status code. You can change these by calling a custom function from within your template. For example, to serve a custom page with a 418 I'm a teapot HTTP status code, create a file in pages/teapot.html containing the following: {{ custom_status(418) }} <html> <head><title>Teapot</title></head> <body> I'm a teapot </body> </html> To serve a custom HTTP header, add a custom_header(name, value) function call. For example: {{ custom_status(418) }} {{ custom_header("x-teapot", "I am") }} <html> <head><title>Teapot</title></head> <body> I'm a teapot </body> </html> You can verify this is working using curl like this: curl -I 'http://127.0.0.1:8001/teapot' HTTP/1.1 418 date: Sun, 26 Apr 2020 18:38:30 GMT server: uvicorn x-teapot: I am content-type: text/html; charset=utf-8 | ["Custom pages and templates"] | [] |
custom_templates:custom-pages-redirects | custom_templates | custom-pages-redirects | Custom redirects | You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html : {{ custom_redirect("https://github.com/simonw/datasette") }} Now requests to http://localhost:8001/datasette will result in a redirect. These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function: {{ custom_redirect("https://github.com/simonw/datasette", 301) }} | ["Custom pages and templates"] | [] |
custom_templates:customization | custom_templates | customization | Custom pages and templates | Datasette provides a number of ways of customizing the way data is displayed. | [] | [] |
custom_templates:customization-static-files | custom_templates | customization-static-files | Serving static files | Datasette can serve static files for you, using the --static option. Consider the following directory structure: metadata.json static-files/styles.css static-files/app.js You can start Datasette using --static assets:static-files/ to serve those files from the /assets/ mount point: datasette --config datasette.yaml --static assets:static-files/ --memory The following URLs will now serve the content from those CSS and JS files: http://localhost:8001/assets/styles.css http://localhost:8001/assets/app.js You can reference those files from datasette.yaml like this, see custom CSS and JavaScript for more details: extra_css_urls: - /assets/styles.css extra_js_urls: - /assets/app.js | ["Custom pages and templates"] | [] |
custom_templates:id1 | custom_templates | id1 | Custom pages | You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory. For example, to add a custom page that is served at http://localhost/about you would create a file in templates/pages/about.html , then start Datasette like this: datasette mydb.db --template-dir=templates/ You can nest directories within pages to create a nested structure. To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html . | ["Custom pages and templates", "Publishing static assets"] | [] |
custom_templates:publishing-static-assets | custom_templates | publishing-static-assets | Publishing static assets | The datasette publish command can be used to publish your static assets, using the same syntax as above: datasette publish cloudrun mydb.db --static assets:static-files/ This will upload the contents of the static-files/ directory as part of the deployment, and configure Datasette to correctly serve the assets from /assets/ . | ["Custom pages and templates"] | [] |
deploying:deploying | deploying | deploying | Deploying Datasette | The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. You can deploy Datasette to other hosting providers using the instructions on this page. | [] | [] |
deploying:deploying-fundamentals | deploying | deploying-fundamentals | Deployment fundamentals | Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000. If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette. | ["Deploying Datasette"] | [] |
deploying:deploying-proxy | deploying | deploying-proxy | Running Datasette behind a proxy | You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: datasette my-database.db --setting base_url /my-datasette/ -p 8009 This will run Datasette with the following URLs: http://127.0.0.1:8009/my-datasette/ - the Datasette homepage http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance. | ["Deploying Datasette"] | [] |
deploying:deploying-systemd | deploying | deploying-systemd | Running Datasette using systemd | You can run Datasette on Ubuntu or Debian systems using systemd . First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . You can copy a test database into that folder like so: cd /home/ubuntu/datasette-root curl -O https://latest.datasette.io/fixtures.db Create a file at /etc/systemd/system/datasette.service with the following contents: [Unit] Description=Datasette After=network.target [Service] Type=simple User=ubuntu Environment=DATASETTE_SECRET= WorkingDirectory=/home/ubuntu/datasette-root ExecStart=datasette serve . -h 127.0.0.1 -p 8000 Restart=on-failure [Install] WantedBy=multi-user.target Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: python3 -c 'import secrets; print(secrets.token_hex(32))' This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. You can start the Datasette process running using the following: sudo systemctl daemon-reload sudo systemctl start datasette.service You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using: sudo systemctl restart datasette.service Once the … | ["Deploying Datasette"] | [] |
events:id1 | events | id1 | Events | Datasette includes a mechanism for tracking events that occur while the software is running. This is primarily intended to be used by plugins, which can both trigger events and listen for events. The core Datasette application triggers events when certain things happen. This page describes those events. Plugins can listen for events using the track_event(datasette, event) plugin hook, which will be called with instances of the following classes - or additional classes registered by other plugins . class datasette.events.LoginEvent(actor: dict | None) Event name: login A user (represented by event.actor ) has logged in. class datasette.events.LogoutEvent(actor: dict | None) Event name: logout A user (represented by event.actor ) has logged out. class datasette.events.CreateTokenEvent(actor: dict | None, expires_after: int | None, restrict_all: list, restrict_database: dict, restrict_resource: dict) Event name: create-token A user created an API token. Variables expires_after -- Number of seconds after which this token will expire. restrict_all -- Restricted permissions for this token. restrict_database -- Restricted database permissions for this token. … | [] | [] |
facets:facets-in-query-strings | facets | facets-in-query-strings | Facets in query strings | To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL. For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this: /dbname/tablename?_facet=state&_facet=city_id This works for both the HTML interface and the .json view. When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this: { "state": { "name": "state", "results": [ { "value": "CA", "label": "CA", "count": 10, "toggle_url": "http://...?_facet=city_id&_facet=state&state=CA", "selected": false }, { "value": "MI", "label": "MI", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&state=MI", "selected": false }, { "value": "MC", "label": "MC", "count": 1, "toggle_url": "http://...?_facet=city_id&_facet=state&state=MC", "selected": false } ], "truncated": false }, "city_id": { "name": "city_id", "results": [ { "value": 1, "label": "San Francisco", "count": 6, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=1", "selected": false }, { "value": 2, "label": "Los Angeles", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=2", "selected": false }, { "value": 3, "label": "Detroit", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=3", "selected": false }, { "value": 4, "label": "Memnonia", "count": 1, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=4", "selected": false } ], "truncated": false } } If Datasette detect… | ["Facets"] | [] |
facets:facets-metadata | facets | facets-metadata | Facets in metadata | You can turn facets on by default for specific tables by adding them to a "facets" key in a Datasette Metadata file. Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: { "databases": { "sf-trees": { "tables": { "Street_Tree_List": { "facets": ["qLegalStatus"] } } } } } Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view. You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this: { "facets": [ {"array": "tags"}, {"date": "created"} ] } You can change the default facet size (the number of results shown for each facet) for a table using facet_size : { "databases": { "sf-trees": { "tables": { "Street_Tree_List": { "facets": ["qLegalStatus"], "facet_size": 10 } } } } } | ["Facets"] | [] |
facets:suggested-facets | facets | suggested-facets | Suggested facets | Datasette's table UI will suggest facets for the user to apply, based on the following criteria: For the currently filtered data are there any columns which, if applied as a facet... Will return 30 or fewer unique options Will return more than one unique option Will return fewer unique options than the total number of filtered rows And the query used to evaluate these criteria can be completed in under 50ms That last point is particularly important: Datasette runs a query for every column that is displayed on a page, which could get expensive - so to avoid slow load times it sets a time limit of just 50ms for each of those queries. This means suggested facets are unlikely to appear for tables with millions of records in them. | ["Facets"] | [] |
full_text_search:full-text-search-fts-versions | full_text_search | full-text-search-fts-versions | FTS versions | There are three different versions of the SQLite FTS module: FTS3, FTS4 and FTS5. You can tell which versions are supported by your instance of Datasette by checking the /-/versions page. FTS5 is the most advanced module but may not be available in the SQLite version that is bundled with your Python installation. Most importantly, FTS5 is the only version that has the ability to order by search relevance without needing extra code. If you can't be sure that FTS5 will be available, you should use FTS4. | ["Full-text search"] | [] |
getting_started:getting-started | getting_started | getting-started | Getting started | [] | [] | |
index:contents | index | contents | Contents | Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions Configuration Configuration via the command-line datasette.yaml reference Settings Plugin configuration Permissions configuration Canned queries configuration Custom CSS and JavaScript The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect datasette create-token Pages and API endpoints Top-level index Database Hidden tables Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Default representation Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Enabling CORS The JSON write API Inserting rows Upserting rows Updating a row Deleting a row Creating a table Creating a table from example data Dropping tables Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the "root" actor Permissions How permissions are resolved Defining permiss… | ["Datasette"] | [] |
installation:id1 | installation | id1 | Installation | If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Installing plugins using pipx Upgrading packages using pipx Using Docker Loading SpatiaLite Installing plugins A note about extensions | [] | [] |
installation:installation-advanced | installation | installation-advanced | Advanced installation options | ["Installation"] | [] | |
installation:installation-basic | installation | installation-basic | Basic installation | ["Installation"] | [] | |
installation:installation-extensions | installation | installation-extensions | A note about extensions | SQLite supports extensions, such as SpatiaLite for geospatial operations. These can be loaded using the --load-extension argument, like so: datasette --load-extension=/usr/local/lib/mod_spatialite.dylib Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension: Your Python installation does not have the ability to load SQLite extensions. In some cases you may see the following error message instead: AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension' On macOS the easiest fix for this is to install Datasette using Homebrew: brew install datasette Use which datasette to confirm that datasette will run that version. The output should look something like this: /usr/local/opt/datasette/bin/datasette If you get a different location here such as /Library/Frameworks/Python.framework/Versions/3.10/bin/datasette you can run the following command to cause datasette to execute the Homebrew version instead: alias datasette=$(echo $(brew --prefix datasette)/bin/datasette) You can undo this operation using: unalias datasette If you need to run SQLite with extension support for other Python code, you can do so by installing Python itself using Homebrew: brew install python Then executing Python using: /usr/local/opt/python@3/libexec/bin/python A more convenient way to work with this version of Python may be to use it to create a virtual environment: /usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv Then activate it like this: source datasette-venv/bin/activate Now running python and pip will work against a version of … | ["Installation"] | [] |
installation:installing-plugins-using-pipx | installation | installing-plugins-using-pipx | Installing plugins using pipx | You can install additional datasette plugins with pipx inject like so: pipx inject datasette datasette-json-html injected package datasette-json-html into venv datasette done! ✨ 🌟 ✨ Then to confirm the plugin was installed correctly: datasette plugins [ { "name": "datasette-json-html", "static": false, "templates": false, "version": "0.6" } ] | ["Installation", "Advanced installation options", "Using pipx"] | [] |
installation:upgrading-packages-using-pipx | installation | upgrading-packages-using-pipx | Upgrading packages using pipx | You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : pipx upgrade datasette upgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: datasette plugins [ { "name": "datasette-vega", "static": true, "templates": false, "version": "0.6" } ] Now upgrade the plugin: pipx runpip datasette install -U datasette-vega Collecting datasette-vega Downloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB) |████████████████████████████████| 1.8 MB 2.0 MB/s ... Installing collected packages: datasette-vega Attempting uninstall: datasette-vega Found existing installation: datasette-vega 0.6 Uninstalling datasette-vega-0.6: Successfully uninstalled datasette-vega-0.6 Successfully installed datasette-vega-0.6.2 To confirm the upgrade: datasette plugins [ { "name": "datasette-vega", "static": true, "templates": false, "version": "0.6.2" } ] | ["Installation", "Advanced installation options", "Using pipx"] | [] |
internals:database-close | internals | database-close | db.close() | Closes all of the open connections to file-backed databases. This is mainly intended to be used by large test suites, to avoid hitting limits on the number of open files. | ["Internals for plugins", "Database class"] | [] |
internals:database-constructor | internals | database-constructor | Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None) | The Database() constructor can be used by plugins, in conjunction with .add_database(db, name=None, route=None) , to create and register new databases. The arguments are as follows: ds - Datasette class (required) The Datasette instance you are attaching this database to. path - string Path to a SQLite database file on disk. is_mutable - boolean Set this to False to cause Datasette to open the file in immutable mode. is_memory - boolean Use this to create non-shared memory connections. memory_name - string or None Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist any changes made to them for the lifetime of the Datasette server process. The first argument is the datasette instance you are attaching to, the second is a path= , then is_mutable and is_memory are both optional arguments. | ["Internals for plugins", "Database class"] | [] |
internals:database-execute | internals | database-execute | await db.execute(sql, ...) | Executes a SQL query against the database and returns the resulting rows (see Results ). sql - string (required) The SQL query to execute. This can include ? or :named parameters. params - list or dict A list or dictionary of values to use for the parameters. List for ? , dictionary for :named . truncate - boolean Should the rows returned by the query be truncated at the maximum page size? Defaults to True , set this to False to disable truncation. custom_time_limit - integer ms A custom time limit for this query. This can be set to a lower value than the Datasette configured default. If a query takes longer than this it will be terminated early and raise a datasette.database.QueryInterrupted exception. page_size - integer Set a custom page size for truncation, overriding the configured Datasette default. log_sql_errors - boolean Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to True . | ["Internals for plugins", "Database class"] | [] |
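A minimal sketch combining the arguments documented above, run inside an async plugin function. The table name mytable and the cutoff value are hypothetical:

```python
# Sketch: a read query using named parameters, a custom time limit and a
# custom truncation page size. "mytable" is a hypothetical table.
async def recent_rows(db):
    results = await db.execute(
        "select * from mytable where created > :cutoff",
        params={"cutoff": "2024-01-01"},  # dict, because the query uses :named style
        custom_time_limit=20,  # milliseconds - lower than the configured default
        page_size=50,  # custom page size for truncation
    )
    return results.rows
```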
internals:database-execute-fn | internals | database-execute-fn | await db.execute_fn(fn) | Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the await . Example usage: def get_version(conn): return conn.execute( "select sqlite_version()" ).fetchall()[0][0] version = await db.execute_fn(get_version) | ["Internals for plugins", "Database class"] | [] |
internals:database-execute-write | internals | database-execute-write | await db.execute_write(sql, params=None, block=True) | SQLite only allows one database connection to write at a time. Datasette handles this for you by maintaining a queue of writes to be executed against a given database. Plugins can submit write operations to this queue and they will be executed in the order in which they are received. This method can be used to queue up a non-SELECT SQL query to be executed against a single write connection to the database. You can pass additional SQL parameters as a tuple or dictionary. The method will block until the operation is completed, and the return value will be the return from calling conn.execute(...) using the underlying sqlite3 Python library. If you pass block=False this behavior changes to "fire and forget" - queries will be added to the write queue and executed in a separate thread while your code can continue to do other things. The method will return a UUID representing the queued task. Each call to execute_write() will be executed inside a transaction. | ["Internals for plugins", "Database class"] | [] |
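A short sketch of both modes described above. The logs table is hypothetical and assumed to already exist:

```python
# Sketch: queueing writes against a hypothetical, pre-existing "logs" table.
async def record_hit(db, path):
    # block=True (the default): waits for the write to complete and returns
    # the result of conn.execute(...)
    await db.execute_write(
        "insert into logs (path) values (?)", (path,)
    )
    # block=False: fire-and-forget - returns a UUID for the queued task
    task_id = await db.execute_write(
        "insert into logs (path) values (?)", (path,), block=False
    )
    return task_id
```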
internals:database-execute-write-fn | internals | database-execute-write-fn | await db.execute_write_fn(fn, block=True, transaction=True) | This method works like .execute_write() , but instead of a SQL statement you give it a callable Python function. Your function will be queued up and then called when the write connection is available, passing that connection as the argument to the function. The function can then perform multiple actions, safe in the knowledge that it has exclusive access to the single writable connection for as long as it is executing. fn needs to be a regular function, not an async def function. For example: def delete_and_return_count(conn): conn.execute("delete from some_table where id > 5") return conn.execute( "select count(*) from some_table" ).fetchone()[0] try: num_rows_left = await database.execute_write_fn( delete_and_return_count ) except Exception as e: print("An error occurred:", e) The value returned from await database.execute_write_fn(...) will be the return value from your function. If your function raises an exception that exception will be propagated up to the await line. By default your function will be executed inside a transaction. You can pass transaction=False to disable this behavior, though if you do that you should be careful to manually apply transactions - ideally using the with conn: pattern, or you may see OperationalError: database table is locked errors. If you specify block=False the method becomes fire-and-forget, queueing your function to be executed and then allowing your code after the call to .execute_write_fn() to continue running while the underlying thread waits for an opportunity to run your function. A UUID representing the queued task will be returned. Any exceptions in your code will be silently swallowed. | ["Internals for plugins", "Database class"] | [] |
internals:database-hash | internals | database-hash | db.hash | If the database was opened in immutable mode, this property returns the 64 character SHA-256 hash of the database contents as a string. Otherwise it returns None . | ["Internals for plugins", "Database class"] | [] |
internals:datasette-absolute-url | internals | datasette-absolute-url | .absolute_url(request, path) | request - Request The current Request object path - string A path, for example /dbname/table.json Returns the absolute URL for the given path, including the protocol and host. For example: absolute_url = datasette.absolute_url( request, "/dbname/table.json" ) # Would return "http://localhost:8001/dbname/table.json" The current request object is used to determine the hostname and protocol that should be used for the returned URL. The force_https_urls configuration setting is taken into account. | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-actors-from-ids | internals | datasette-actors-from-ids | await .actors_from_ids(actor_ids) | actor_ids - list of strings or integers A list of actor IDs to look up. Returns a dictionary, where the keys are the IDs passed to it and the values are the corresponding actor dictionaries. This method is mainly designed to be used with plugins. See the actors_from_ids(datasette, actor_ids) documentation for details. If no plugins that implement that hook are installed, the default return value looks like this: { "1": {"id": "1"}, "2": {"id": "2"} } | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-add-database | internals | datasette-add-database | .add_database(db, name=None, route=None) | db - datasette.database.Database instance The database to be attached. name - string, optional The name to be used for this database . If not specified Datasette will pick one based on the filename or memory name. route - string, optional This will be used in the URL path. If not specified, it will default to the same thing as the name . The datasette.add_database(db) method lets you add a new database to the current Datasette instance. The db parameter should be an instance of the datasette.database.Database class. For example: from datasette.database import Database datasette.add_database( Database( datasette, path="path/to/my-new-database.db", ) ) This will add a mutable database and serve it at /my-new-database . Use is_mutable=False to add an immutable database. .add_database() returns the Database instance, with its name set as the database.name attribute. Any time you are working with a newly added database you should use the return value of .add_database() , for example: db = datasette.add_database( Database(datasette, memory_name="statistics") ) await db.execute_write( "CREATE TABLE foo(id integer primary key)" ) | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-add-memory-database | internals | datasette-add-memory-database | .add_memory_database(name) | Adds a shared in-memory database with the specified name: datasette.add_memory_database("statistics") This is a shortcut for the following: from datasette.database import Database datasette.add_database( Database(datasette, memory_name="statistics") ) Using either of these patterns will result in the in-memory database being served at /statistics . | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-add-message | internals | datasette-add-message | .add_message(request, message, type=datasette.INFO) | request - Request The current Request object message - string The message string type - constant, optional The message type - datasette.INFO , datasette.WARNING or datasette.ERROR Datasette's flash messaging mechanism allows you to add a message that will be displayed to the user on the next page that they visit. Messages are persisted in a ds_messages cookie. This method adds a message to that cookie. You can try out these messages (including the different visual styling of the three message types) using the /-/messages debugging tool. | ["Internals for plugins", "Datasette class"] | [] |
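A sketch of how a plugin might use this from a custom view. The route, view name and message text are hypothetical; it assumes the register_routes(datasette) hook described elsewhere in these docs:

```python
# Sketch: a hypothetical route that sets flash messages and redirects.
from datasette import hookimpl, Response


async def flag_view(request, datasette):
    datasette.add_message(request, "Item flagged for review")  # defaults to datasette.INFO
    datasette.add_message(
        request, "Flagging is rate limited", type=datasette.WARNING
    )
    return Response.redirect("/")


@hookimpl
def register_routes():
    return [(r"^/-/flag$", flag_view)]
```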
internals:datasette-check-visibility | internals | datasette-check-visibility | await .check_visibility(actor, action=None, resource=None, permissions=None) | actor - dictionary The authenticated actor. This is usually request.actor . action - string, optional The name of the action that is being permission checked. resource - string or tuple, optional The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. permissions - list of action strings or (action, resource) tuples, optional Provide this instead of action and resource to check multiple permissions at once. This convenience method can be used to answer the question "should this item be considered private, in that it is visible to me but it is not visible to anonymous users?" It returns a tuple of two booleans, (visible, private) . visible indicates if the actor can see this resource. private will be True if an anonymous user would not be able to view the resource. This example checks if the user can access a specific table, and sets private so that a padlock icon can later be displayed: visible, private = await datasette.check_visibility( request.actor, action="view-table", resource=(database, table), ) The following example runs three checks in a row, similar to await .ensure_permissions(actor, permissions) . If any of the checks are denied before one of them is explicitly granted then visible will be … | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-create-token | internals | datasette-create-token | .create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None) | actor_id - string The ID of the actor to create a token for. expires_after - int, optional The number of seconds after which the token should expire. restrict_all - iterable, optional A list of actions that this token should be restricted to across all databases and resources. restrict_database - dict, optional For restricting actions within specific databases, e.g. {"mydb": ["view-table", "view-query"]} . restrict_resource - dict, optional For restricting actions to specific resources (tables, SQL views and Canned queries ) within a database. For example: {"mydb": {"mytable": ["insert-row", "update-row"]}} . This method returns a signed API token of the format dstok_... which can be used to authenticate requests to the Datasette API. All tokens must have an actor_id string indicating the ID of the actor which the token will act on behalf of. Tokens default to lasting forever, but can be set to expire after a given number of seconds using the expires_after argument. The following code creates a token for user1 that will expire after an hour: token = datasette.create_token( actor_id="user1", expires_after=3600, ) The three restrict_* arguments can be used to create a token that has … | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-databases | internals | datasette-databases | .databases | Property exposing a collections.OrderedDict of databases currently connected to Datasette. The dictionary keys are the name of the database that is used in the URL - e.g. /fixtures would have a key of "fixtures" . The values are Database class instances. All databases are listed, irrespective of user permissions. | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-ensure-permissions | internals | datasette-ensure-permissions | await .ensure_permissions(actor, permissions) | actor - dictionary The authenticated actor. This is usually request.actor . permissions - list A list of permissions to check. Each permission in that list can be a string action name or a 2-tuple of (action, resource) . This method allows multiple permissions to be checked at once. It raises a datasette.Forbidden exception if any of the checks are denied before one of them is explicitly granted. This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if one of the following checks returns True before any of them returns False : await datasette.ensure_permissions( request.actor, [ ("view-table", (database, table)), ("view-database", database), "view-instance", ], ) | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-get-database | internals | datasette-get-database | .get_database(name) | name - string, optional The name of the database. Returns the specified database object. Raises a KeyError if the database does not exist. Call this method without an argument to return the first connected database. | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-get-permission | internals | datasette-get-permission | .get_permission(name_or_abbr) | name_or_abbr - string The name or abbreviation of the permission to look up, e.g. view-table or vt . Returns a Permission object representing the permission, or raises a KeyError if one is not found. | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-permission-allowed | internals | datasette-permission-allowed | await .permission_allowed(actor, action, resource=None, default=...) | actor - dictionary The authenticated actor. This is usually request.actor . action - string The name of the action that is being permission checked. resource - string or tuple, optional The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. default - optional: True, False or None What value should be returned by default if nothing provides an opinion on this permission check. Set to True for default allow or False for default deny. If not specified the default from the Permission() tuple that was registered using register_permissions(datasette) will be used. Check if the given actor has permission to perform the given action on the given resource. Some permission checks are carried out against rules defined in datasette.yaml , while other custom permissions may be decided by plugins that implement the permission_allowed(datasette, actor, action, resource) plugin hook. If neither datasette.yaml nor any of the plugins provide an answer to the permission query the default argument will be returned. See Built-in permissions for a full list of permission actions included in Datasette core. | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-permissions | internals | datasette-permissions | .permissions | Property exposing a dictionary of permissions that have been registered using the register_permissions(datasette) plugin hook. The dictionary keys are the permission names - e.g. view-instance - and the values are Permission() objects describing the permission. Here is a description of that object . | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-plugin-config | internals | datasette-plugin-config | .plugin_config(plugin_name, database=None, table=None) | plugin_name - string The name of the plugin to look up configuration for. Usually this is something similar to datasette-cluster-map . database - None or string The database the user is interacting with. table - None or string The table the user is interacting with. This method lets you read plugin configuration values that were set in datasette.yaml . See Writing plugins that accept configuration for full details of how this method should be used. The return value will be the value from the configuration file - usually a dictionary. If the plugin is not configured the return value will be None . | ["Internals for plugins", "Datasette class"] | [] |
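A sketch of reading configuration from a plugin hook. The plugin name datasette-example, the setting key and the default value are all hypothetical:

```python
# Sketch: reading per-table configuration for a hypothetical plugin called
# datasette-example, falling back to a default when nothing is configured.
from datasette import hookimpl


@hookimpl
def extra_template_vars(datasette, database, table):
    config = (
        datasette.plugin_config(
            "datasette-example", database=database, table=table
        )
        or {}  # plugin_config() returns None when the plugin is not configured
    )
    return {"example_setting": config.get("setting", "default-value")}
```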
internals:datasette-remove-database | internals | datasette-remove-database | .remove_database(name) | name - string The name of the database to be removed. This removes a database that has been previously added. name= is the unique name of that database. | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-resolve-database | internals | datasette-resolve-database | .resolve_database(request) | request - Request object A request object If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a database named group, you can use this method to resolve the database object. This returns a Database instance. If the database cannot be found, it raises a datasette.utils.asgi.DatabaseNotFound exception - which is a subclass of datasette.utils.asgi.NotFound with a .database_name attribute set to the name of the database that was requested. | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-resolve-row | internals | datasette-resolve-row | .resolve_row(request) | request - Request object A request object This method assumes your route declares named groups for database , table and pks . It returns a ResolvedRow named tuple instance with the following fields: db - Database The database object table - string The name of the table sql - string SQL snippet that can be used in a WHERE clause to select the row params - dict Parameters that should be passed to the SQL query pks - list List of primary key column names pk_values - list List of primary key values decoded from the URL row - sqlite3.Row The row itself If the database cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. If the table does not exist it raises a datasette.utils.asgi.TableNotFound … | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-resolve-table | internals | datasette-resolve-table | .resolve_table(request) | request - Request object A request object This assumes that the regular expression for your route declares both a database and a table named group. It returns a ResolvedTable named tuple instance with the following fields: db - Database The database object table - string The name of the table (or view) is_view - boolean True if this is a view, False if it is a table If the database cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception - a subclass of datasette.utils.asgi.NotFound with .database_name and .table attributes. | ["Internals for plugins", "Datasette class"] | [] |
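A sketch of a custom view using this method, assuming it is awaited like the other resolve methods and that the route declares the required named groups. The /table-info/ route and view name are hypothetical:

```python
# Sketch: resolving the database and table from the URL in a custom view.
from datasette import hookimpl, Response, NotFound


async def table_info(request, datasette):
    try:
        resolved = await datasette.resolve_table(request)
    except NotFound:
        # DatabaseNotFound and TableNotFound are subclasses of NotFound
        return Response.text("No such table", status=404)
    return Response.json(
        {
            "database": resolved.db.name,
            "table": resolved.table,
            "is_view": resolved.is_view,
        }
    )


@hookimpl
def register_routes():
    return [(r"^/table-info/(?P<database>[^/]+)/(?P<table>[^/]+)$", table_info)]
```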
internals:datasette-setting | internals | datasette-setting | .setting(key) | key - string The name of the setting, e.g. base_url . Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting. For example: downloads_are_allowed = datasette.setting("allow_download") | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-track-event | internals | datasette-track-event | await .track_event(event) | event - Event An instance of a subclass of datasette.events.Event . Plugins can call this to track events, using classes they have previously registered. See Event tracking for details. The event will then be passed to all plugins that have registered to receive events using the track_event(datasette, event) hook. Example usage, assuming the plugin has previously registered the BanUserEvent class: await datasette.track_event( BanUserEvent(user={"id": 1, "username": "cleverbot"}) ) | ["Internals for plugins", "Datasette class"] | [] |
internals:datasette-unsign | internals | datasette-unsign | .unsign(value, namespace="default") | signed - string The signed value that was created using .sign(value, namespace="default") . namespace - string, optional The alternative namespace, if one was used. Returns the original, decoded object that was passed to .sign(value, namespace="default") . If the signature is not valid this raises a itsdangerous.BadSignature exception. | ["Internals for plugins", "Datasette class"] | [] |
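A sketch of a sign/unsign round trip, given a datasette instance (for example inside a plugin hook). The "example" namespace and the payload are hypothetical:

```python
# Sketch: signing a value and decoding it again under the same namespace.
from itsdangerous import BadSignature

signed = datasette.sign({"id": "user-1"}, namespace="example")
try:
    value = datasette.unsign(signed, namespace="example")
    # value == {"id": "user-1"}
except BadSignature:
    value = None  # signature was tampered with or used a different namespace
```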
internals:id1 | internals | id1 | .get_internal_database() | Returns a database object for reading and writing to the private internal database . | ["Internals for plugins", "Datasette class"] | [] |
internals:internals | internals | internals | Internals for plugins | Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here. | [] | [] |
internals:internals-database | internals | internals-database | Database class | Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas. | ["Internals for plugins"] | [] |
internals:internals-database-introspection | internals | internals-database-introspection | Database introspection | The Database class also provides properties and methods for introspecting the database. db.name - string The name of the database - usually the filename without the .db extension. db.size - integer The size of the database file in bytes. 0 for :memory: databases. db.mtime_ns - integer or None The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases. db.is_mutable - boolean Is this database mutable, and allowed to accept writes? db.is_memory - boolean Is this database an in-memory database? await db.attached_databases() - list of named tuples Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields seq , name and file . await db.table_exists(table) - boolean Check if a table called table exists. await db.view_exists(view) - boolean … | ["Internals for plugins", "Database class"] | [] |
internals:internals-datasette | internals | internals-datasette | Datasette class | This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette . You can create your own instance of this - for example to help write tests for a plugin - like so: from datasette.app import Datasette # With no arguments a single in-memory database will be attached datasette = Datasette() # The files= argument can load files from disk datasette = Datasette(files=["/path/to/my-database.db"]) # Pass metadata as a JSON dictionary like this datasette = Datasette( files=["/path/to/my-database.db"], metadata={ "databases": { "my-database": { "description": "This is my database" } } }, ) Constructor parameters include: files=[...] - a list of database files to open immutables=[...] - a list of database files to open in immutable mode metadata={...} - a dictionary of Metadata config_dir=... - the configuration directory to use, stored in datasette.config_dir | ["Internals for plugins"] | [] |
internals:internals-datasette-urls | internals | internals-datasette-urls | datasette.urls | The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect. datasette.urls.instance(format=None) Returns the URL to the Datasette instance root page. This is usually "/" . datasette.urls.path(path, format=None) Takes a path and returns the full path, taking base_url into account. For example, datasette.urls.path("-/logout") will return the path to the logout page, which will be "/-/logout" by default or /prefix-path/-/logout if base_url is set to /prefix-path/ datasette.urls.logout() Returns the URL to the logout page, usually "/-/logout" datasette.urls.static(path) Returns the URL of one of Datasette's default static assets, for example "/-/static/app.css" datasette.urls.static_plugins(plugin_name, path) Returns the URL of one of the static assets belonging to a plugin. datasette.urls.static_plugins("datasette_cluster_map", "datasette-cluster-map.js") would return "/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js" datasette.urls.static(path) … | ["Internals for plugins", "Datasette class"] | [] |
internals:internals-internal | internals | internals-internal | Datasette's internal database | Datasette maintains an "internal" SQLite database used for configuration, caching, and storage. Plugins can store configuration, settings, and other data inside this database. By default, Datasette will use a temporary in-memory SQLite database as the internal database, which is created at startup and destroyed at shutdown. Users of Datasette can optionally pass in a --internal flag to specify the path to a SQLite database to use as the internal database, which will persist internal data across Datasette instances. Datasette maintains tables called catalog_databases , catalog_tables , catalog_columns , catalog_indexes , catalog_foreign_keys with details of the attached databases and their schemas. These tables should not be considered a stable API - they may change between Datasette releases. The internal database is not exposed in the Datasette application by default, which means private data can safely be stored without worry of accidentally leaking information through the default Datasette interface and API. However, other plugins do have full read and write access to the internal database. Plugins can access this database by calling internal_db = datasette.get_internal_database() and then executing queries using the Database API . Plugin authors are asked to practice good etiquette when using the internal database, as all plugins use the same database to store data. For example: Use a unique prefix when creating tables, indices, and triggers in the internal database. If your plugin is called datasette-xyz , then prefix names with datasette_xyz_* . Avoid long-running write statements that may stall or block other plugins that are trying to write at the same time. Use temporary tables or shared in-memory attached databases when possible. … | ["Internals for plugins"] | [] |
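A sketch of the etiquette described above: a hypothetical plugin named datasette-xyz creating its own prefixed table in the internal database, using the documented get_internal_database() and execute_write() methods:

```python
# Sketch: a hypothetical datasette-xyz plugin setting up a prefixed table
# in the internal database so it cannot collide with other plugins.
async def ensure_cache_table(datasette):
    internal_db = datasette.get_internal_database()
    await internal_db.execute_write(
        """
        create table if not exists datasette_xyz_cache (
            key text primary key,
            value text
        )
        """
    )
```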
internals:internals-multiparams | internals | internals-multiparams | The MultiParams class | request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values. Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar . request.args[key] - string Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args["foo"] would return "1" . request.args.get(key) - string or None Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get("q", "") . request.args.getlist(key) - list of strings Returns the list of strings for that key. request.args.getlist("foo") would return ["1", "2"] in the above example. request.args.getlist("bar") would return ["3"] . If the key is missing an empty list will be returned. request.args.keys() - list of strings Returns the list of available keys - for the example this would be ["foo", "bar"] . key in request.args - True or False You can use if key in request.args to check if a key is present. for key in request.args - iterator This lets you loop through every available key. le… | ["Internals for plugins"] | [] |
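A sketch of the accessors described above, using Request.fake() to build a request for testing (an assumption about using that test helper here; in a real plugin you would receive request.args directly):

```python
# Sketch: exercising MultiParams for the query string ?foo=1&foo=2&bar=3
from datasette.utils.asgi import Request

request = Request.fake("/path?foo=1&foo=2&bar=3")
args = request.args
assert args["foo"] == "1"  # first value for the key
assert args.get("missing", "default") == "default"
assert args.getlist("foo") == ["1", "2"]
assert args.getlist("missing") == []  # empty list when the key is absent
assert "bar" in args
```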
internals:internals-response | internals | internals-response | Response class | The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. The Response() constructor takes the following arguments: body - string The body of the response. status - integer (optional) The HTTP status - defaults to 200. headers - dictionary (optional) A dictionary of extra HTTP headers, e.g. {"x-hello": "world"} . content_type - string (optional) The content-type for the response. Defaults to text/plain . For example: from datasette.utils.asgi import Response response = Response( "<xml>This is XML</xml>", content_type="application/xml; charset=utf-8", ) The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods: from datasette.utils.asgi import Response html_response = Response.html("This is HTML") json_response = Response.json({"this_is": "json"}) text_response = Response.text( "This will become utf-8 encoded text" ) # Redirects are served as 302, unless you pass status=301: redirect_response = Response.redirect( "https://latest.datasette.io/" ) Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. Each of the helper methods take optional status= and headers= argument… | ["Internals for plugins"] | [] |
internals:internals-response-asgi-send | internals | internals-response-asgi-send | Returning a response with .asgi_send(send) | In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example: async def require_authorization(scope, receive, send): response = Response.text( "401 Authorization Required", headers={ "www-authenticate": 'Basic realm="Datasette", charset="UTF-8"' }, status=401, ) await response.asgi_send(send) | ["Internals for plugins", "Response class"] | [] |
internals:internals-response-set-cookie | internals | internals-response-set-cookie | Setting cookies with response.set_cookie() | To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this: def set_cookie( self, key, value="", max_age=None, expires=None, path="/", domain=None, secure=False, httponly=False, samesite="lax", ): ... You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication : response = Response.redirect("/") response.set_cookie( "ds_actor", datasette.sign({"a": {"id": "cleopaws"}}, "actor"), ) return response | ["Internals for plugins", "Response class"] | [] |
internals:internals-shortcuts | internals | internals-shortcuts | Import shortcuts | The following commonly used symbols can be imported directly from the datasette module: from datasette import Response from datasette import Forbidden from datasette import NotFound from datasette import hookimpl from datasette import actor_matches_allow | ["Internals for plugins"] | [] |
internals:internals-utils-derive-named-parameters | internals | internals-utils-derive-named-parameters | derive_named_parameters(db, sql) | Derive the list of named parameters referenced in a SQL query, using an explain query executed against the provided database. async datasette.utils.derive_named_parameters(db: Database, sql: str) -> List[str] Given a SQL statement, return a list of named parameters that are used in the statement, e.g. for select * from foo where id=:id this would return ["id"] | ["Internals for plugins", "The datasette.utils module"] | [] |
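A short sketch of calling this utility from plugin code - the surrounding helper function is illustrative, but derive_named_parameters and datasette.get_database() are the documented APIs:

    from datasette.utils import derive_named_parameters


    async def inspect_query(datasette):
        db = datasette.get_database()
        params = await derive_named_parameters(
            db, "select * from foo where id = :id and name = :name"
        )
        # params is now ["id", "name"]
        return params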
internals:internals-utils-parse-metadata | internals | internals-utils-parse-metadata | parse_metadata(content) | This function accepts a string containing either JSON or YAML, expected to be of the format described in Metadata . It returns a nested Python dictionary representing the parsed data from that string. If the metadata cannot be parsed as either JSON or YAML the function will raise a utils.BadMetadataError exception. datasette.utils.parse_metadata(content: str) -> dict Detects if content is JSON or YAML and parses it appropriately. | ["Internals for plugins", "The datasette.utils module"] | [] |
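For example, both of these calls should return the same dictionary, since the function accepts JSON and YAML alike:

    from datasette.utils import parse_metadata

    # JSON input
    meta = parse_metadata('{"title": "My instance"}')
    # Equivalent YAML input
    meta = parse_metadata("title: My instance")
    assert meta == {"title": "My instance"}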
introspection:id1 | introspection | id1 | Introspection | Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON. | [] | [] |
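For example, assuming a local instance running on the default port, the version details can be fetched as JSON like this:

    curl http://127.0.0.1:8001/-/versions.json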
introspection:jsondataview-actor | introspection | jsondataview-actor | /-/actor | Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. { "actor": { "id": 1, "username": "some-user" } } | ["Introspection"] | [] |
introspection:messagesdebugview | introspection | messagesdebugview | /-/messages | The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature. | ["Introspection"] | [] |
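A sketch of setting a message from plugin code, assuming an invented /-/send-demo-message route - the add_message() call itself is the documented API:

    from datasette import hookimpl, Response


    @hookimpl
    def register_routes():
        async def send_message(datasette, request):
            # Queue up a flash message, then redirect to the debug tool
            datasette.add_message(request, "Hello from a plugin!", datasette.INFO)
            return Response.redirect("/-/messages")

        return [(r"^/-/send-demo-message$", send_message)]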
javascript_plugins:id1 | javascript_plugins | id1 | JavaScript plugins | Datasette can run custom JavaScript in several different ways: Datasette plugins written in Python can use the extra_js_urls() or extra_body_script() plugin hooks to inject JavaScript into a page Datasette instances with custom templates can include additional JavaScript in those templates The extra_js_urls key in datasette.yaml can be used to include extra JavaScript There are no limitations on what this JavaScript can do. It is executed directly by the browser, so it can manipulate the DOM, fetch additional data and do anything else that JavaScript is capable of. Custom JavaScript has security implications, especially for authenticated Datasette instances where the JavaScript might run in the context of the authenticated user. It's important to carefully review any JavaScript you run in your Datasette instance. | [] | [] |
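As an illustration of the first option, a Python plugin can inject JavaScript using the extra_js_urls() hook - the plugin name and file path here are hypothetical:

    from datasette import hookimpl


    @hookimpl
    def extra_js_urls():
        # Serve a file from this plugin's static/ directory
        return ["/-/static-plugins/datasette-xyz/app.js"]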
javascript_plugins:id2 | javascript_plugins | id2 | JavaScript plugin objects | JavaScript plugins are blocks of code that can be registered with Datasette using the registerPlugin() method on the datasetteManager object. The implementation object passed to this method should include a version key defining the plugin version, and one or more of the following named functions providing the implementation of the plugin: | ["JavaScript plugins"] | [] |
javascript_plugins:javascript-datasette-init | javascript_plugins | javascript-datasette-init | The datasette_init event | Datasette emits a custom event called datasette_init when the page is loaded. This event is dispatched on the document object, and includes a detail object with a reference to the datasetteManager object. Your JavaScript code can listen out for this event using document.addEventListener() like this: document.addEventListener("datasette_init", function (evt) { const manager = evt.detail; console.log("Datasette version:", manager.VERSION); }); | ["JavaScript plugins"] | [] |
javascript_plugins:javascript-datasette-manager | javascript_plugins | javascript-datasette-manager | datasetteManager | The datasetteManager object provides the following properties: VERSION - string The version of Datasette. plugins - Map() A Map of currently loaded plugin names to plugin implementations. registerPlugin(name, implementation) Call this to register a plugin, passing its name and implementation. selectors - object An object providing named aliases to useful CSS selectors, listed below. | ["JavaScript plugins"] | [] |
javascript_plugins:javascript-datasette-manager-selectors | javascript_plugins | javascript-datasette-manager-selectors | Selectors | These are available on the selectors property of the datasetteManager object. const DOM_SELECTORS = { /** Should have one match */ jsonExportLink: ".export-links a[href*=json]", /** Event listeners that go outside of the main table, e.g. existing scroll listener */ tableWrapper: ".table-wrapper", table: "table.rows-and-columns", aboveTablePanel: ".above-table-panel", // These could have multiple matches /** Used for selecting table headers. Use makeColumnActions if you want to add menu items. */ tableHeaders: `table.rows-and-columns th`, /** Used to add "where" clauses to query using direct manipulation */ filterRows: ".filter-row", /** Used to show top available enum values for a column ("facets") */ facetResults: ".facet-results [data-column]", }; | ["JavaScript plugins"] | [] |
javascript_plugins:javascript-plugins-makeabovetablepanelconfigs | javascript_plugins | javascript-plugins-makeabovetablepanelconfigs | makeAboveTablePanelConfigs() | This method should return a JavaScript array of objects defining additional panels to be added to the top of the table page. Each object should have the following: id - string A unique string ID for the panel, for example map-panel label - string A human-readable label for the panel render(node) - function A function that will be called with a DOM node to render the panel into This example shows how a plugin might define a single panel: document.addEventListener('datasette_init', function(ev) { ev.detail.registerPlugin('panel-plugin', { version: 0.1, makeAboveTablePanelConfigs: () => { return [ { id: 'first-panel', label: 'First panel', render: node => { node.innerHTML = '<h2>My custom panel</h2><p>This is a custom panel that I added using a JavaScript plugin</p>'; } } ] } }); }); When a page with a table loads, all registered plugins that implement makeAboveTablePanelConfigs() will be called and panels they return will be added to the top of the table page. | ["JavaScript plugins", "JavaScript plugin objects"] | [] |
javascript_plugins:javascript-plugins-makecolumnactions | javascript_plugins | javascript-plugins-makecolumnactions | makeColumnActions(columnDetails) | This method, if present, will be called when Datasette is rendering the cog action menu icons that appear at the top of the table view. By default these include options like "Sort ascending/descending" and "Facet by this", but plugins can return additional actions to be included in this menu. The method will be called with a columnDetails object with the following keys: columnName - string The name of the column columnNotNull - boolean True if the column is defined as NOT NULL columnType - string The SQLite data type of the column isPk - boolean True if the column is part of the primary key It should return a JavaScript array of objects each with a label and onClick property: label - string The human-readable label for the action onClick(evt) - function A function that will be called when the action is clicked The evt object passed to the onClick is the standard browser event object that triggered the click. This example plugin adds two menu items - one to copy … | ["JavaScript plugins", "JavaScript plugin objects"] | [] |
json_api:column-filter-arguments | json_api | column-filter-arguments | Column filter arguments | You can filter the data returned by the table based on column values using a query string argument. ?column__exact=value or ?_column=value Returns rows where the specified column exactly matches the value. ?column__not=value Returns rows where the column does not match the value. ?column__contains=value Rows where the string column contains the specified value ( column like "%value%" in SQL). ?column__notcontains=value Rows where the string column does not contain the specified value ( column not like "%value%" in SQL). ?column__endswith=value Rows where the string column ends with the specified value ( column like "%value" in SQL). ?column__startswith=value Rows where the string column starts with the specified value ( column like "value%" in SQL). ?column__gt=value Rows which are greater than the specified value. ?column__gte=value Rows which… | ["JSON API", "Table arguments"] | [] |
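For example, assuming a hypothetical pets database with a dogs table that has age and name columns, this request combines two of the filters above:

    curl "http://127.0.0.1:8001/pets/dogs.json?age__gt=3&name__contains=son"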
json_api:expand-foreign-keys | json_api | expand-foreign-keys | Expanding foreign key references | Datasette can detect foreign key relationships and resolve those references into labels. The HTML interface does this by default for every detected foreign key column - you can turn that off using ?_labels=off . You can request foreign keys be expanded in JSON using the _labels=on or _label=COLUMN special query string parameters. Here's what an expanded row looks like: [ { "rowid": 1, "TreeID": 141565, "qLegalStatus": { "value": 1, "label": "Permitted Site" }, "qSpecies": { "value": 1, "label": "Myoporum laetum :: Myoporum" }, "qAddress": "501X Baker St", "SiteOrder": 1 } ] The column in the foreign key table that is used for the label can be specified in metadata.json - see Specifying the label column for a table . | ["JSON API"] | [] |
json_api:id1 | json_api | id1 | JSON API | Datasette provides a JSON API for your SQLite databases. Anything you can do through the Datasette user interface can also be accessed as JSON via the API. To access the API for a page, either click on the .json link on that page or edit the URL and add a .json extension to it. | [] | [] |
json_api:id2 | json_api | id2 | Table arguments | The Datasette table view takes a number of special query string arguments. | ["JSON API"] | [] |
json_api:json-api-cors | json_api | json-api-cors | Enabling CORS | If you start Datasette with the --cors option, each JSON endpoint will be served with the following additional HTTP headers: Access-Control-Allow-Origin: * Access-Control-Allow-Headers: Authorization, Content-Type Access-Control-Expose-Headers: Link Access-Control-Allow-Methods: GET, POST, HEAD, OPTIONS Access-Control-Max-Age: 3600 This allows JavaScript running on any domain to make cross-origin requests to interact with the Datasette API. If you start Datasette without the --cors option only JavaScript running on the same domain as Datasette will be able to access the API. Here's how to serve data.db with CORS enabled: datasette data.db --cors | ["JSON API"] | [] |
json_api:json-api-default | json_api | json-api-default | Default representation | The default JSON representation of data from a SQLite table or custom query looks like this: { "ok": true, "rows": [ { "id": 3, "name": "Detroit" }, { "id": 2, "name": "Los Angeles" }, { "id": 4, "name": "Memnonia" }, { "id": 1, "name": "San Francisco" } ], "truncated": false } "ok" is always true if an error did not occur. The "rows" key is a list of objects, each one representing a row. The "truncated" key lets you know if the query was truncated. This can happen if a SQL query returns more than 1,000 results (or the max_returned_rows setting). For table pages, an additional key "next" may be present. This indicates that the next page in the pagination set can be retrieved using ?_next=VALUE . | ["JSON API"] | [] |
json_api:json-api-discover-alternate | json_api | json-api-discover-alternate | Discovering the JSON for a page | Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. You can find this near the top of the source code of those pages, looking like this: <link rel="alternate" type="application/json+datasette" href="https://latest.datasette.io/fixtures/sortable.json"> The JSON URL is also made available in a Link HTTP header for the page: Link: https://latest.datasette.io/fixtures/sortable.json; rel="alternate"; type="application/json+datasette" | ["JSON API"] | [] |
json_api:json-api-shapes | json_api | json-api-shapes | Different shapes | The _shape parameter can be used to access alternative formats for the rows key which may be more convenient for your application. There are three options: ?_shape=objects - "rows" is a list of JSON key/value objects - the default ?_shape=arrays - "rows" is a list of lists, where the order of values in each list matches the order of the columns ?_shape=array - a JSON array of objects - effectively just the "rows" key from the default representation ?_shape=array&_nl=on - a newline-separated list of JSON objects ?_shape=arrayfirst - a flat JSON array containing just the first value from each row ?_shape=object - a JSON object keyed using the primary keys of the rows _shape=arrays looks like this: { "ok": true, "next": null, "rows": [ [3, "Detroit"], [2, "Los Angeles"], [4, "Memnonia"], [1, "San Francisco"] ] } _shape=array looks like this: [ { "id": 3, "name": "Detroit" }, { "id": 2, "name": "Los Angeles" }, { "id": 4, "name": "Memnonia" }, { "id": 1, "name": "San Francisco" } ] _shape=array&_nl=on looks like this: {"id": 1, "value": "Myoporum laetum :: Myoporum"} {"id": 2, "value": "Metrosideros excelsa :: New Zealand Xmas Tree"} {"id": 3, "value": "Pinus radiata :: Monterey Pine"} _shape=arrayfirst looks like this: [1, 2, 3] _shape=object looks like this: { "1": { "id": 1, "value": "Myoporum laetum :: Myoporum" }, "2": { "id": 2, "value": "Metrosideros excelsa :… | ["JSON API"] | [] |
json_api:json-api-write | json_api | json-api-write | The JSON write API | Datasette provides a write API for JSON data. This is a POST-only API that requires an authenticated API token, see API Tokens . The token will need to have the specified Permissions . | ["JSON API"] | [] |
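As a sketch of the general request shape, here is a curl call against the insert endpoint documented below - the data database, docs table, and token environment variable are assumptions for illustration:

    curl -X POST http://127.0.0.1:8001/data/docs/-/insert \
      -H "Authorization: Bearer $DATASETTE_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"row": {"title": "First document"}}'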
json_api:rowdeleteview | json_api | rowdeleteview | Deleting a row | To delete a row, make a POST to /<database>/<table>/<row-pks>/-/delete . This requires the delete-row permission. POST /<database>/<table>/<row-pks>/-/delete Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> <row-pks> here is the tilde-encoded primary key value of the row to delete - or a comma-separated list of primary key values if the table has a composite primary key. If successful, this will return a 200 status code and a {"ok": true} response body. Any errors will return {"errors": ["... descriptive message ..."], "ok": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error. | ["JSON API", "The JSON write API"] | [] |
json_api:rowupdateview | json_api | rowupdateview | Updating a row | To update a row, make a POST to /<database>/<table>/<row-pks>/-/update . This requires the update-row permission. POST /<database>/<table>/<row-pks>/-/update Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> { "update": { "text_column": "New text string", "integer_column": 3, "float_column": 3.14 } } <row-pks> here is the tilde-encoded primary key value of the row to update - or a comma-separated list of primary key values if the table has a composite primary key. You only need to pass the columns you want to update. Any other columns will be left unchanged. If successful, this will return a 200 status code and a {"ok": true} response body. Add "return": true to the request body to return the updated row: { "update": { "title": "New title" }, "return": true } The returned JSON will look like this: { "ok": true, "row": { "id": 1, "title": "New title", "other_column": "Will be present here too" } } Any errors will return {"errors": ["... descriptive message ..."], "ok": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error. Pass "alter": true to automatically add any missing columns to the table. This requires the alter-table permission. | ["JSON API", "The JSON write API"] | [] |
json_api:tablecreateview | json_api | tablecreateview | Creating a table | To create a table, make a POST to /<database>/-/create . This requires the create-table permission. POST /<database>/-/create Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> { "table": "name_of_new_table", "columns": [ { "name": "id", "type": "integer" }, { "name": "title", "type": "text" } ], "pk": "id" } The JSON here describes the table that will be created: table is the name of the table to create. This field is required. columns is a list of columns to create. Each column is a dictionary with name and type keys. name is the name of the column. This is required. type is the type of the column. This is optional - if not provided, text will be assumed. The valid types are text , integer , float and blob . pk is the primary key for the table. This is optional - if not provided, Datasette will create a SQLite table with a hidden rowid column. If the primary key is an integer column, it will be configured to automatically increment for each new record. If you set this to id without including an id column in the list of columns , Datasette will create an auto-incrementing integer ID column for you. pks can be used instead of pk to create a compound primary key. It should be a JSON list of column names to use in that primary key. … | ["JSON API", "The JSON write API"] | [] |
json_api:tablecreateview-example | json_api | tablecreateview-example | Creating a table from example data | Instead of specifying columns directly you can instead pass a single example row or a list of rows . Datasette will create a table with a schema that matches those rows and insert them for you: POST /<database>/-/create Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> { "table": "creatures", "rows": [ { "id": 1, "name": "Tarantula" }, { "id": 2, "name": "Kākāpō" } ], "pk": "id" } Doing this requires both the create-table and insert-row permissions. The 201 response here will be similar to the columns form, but will also include the number of rows that were inserted as row_count : { "ok": true, "database": "data", "table": "creatures", "table_url": "http://127.0.0.1:8001/data/creatures", "table_api_url": "http://127.0.0.1:8001/data/creatures.json", "schema": "CREATE TABLE [creatures] (\n [id] INTEGER PRIMARY KEY,\n [name] TEXT\n)", "row_count": 2 } You can call the create endpoint multiple times for the same table provided you are specifying the table using the rows or row option. New rows will be inserted into the table each time. This means you can use this API if you are unsure if the relevant table has been created yet. If you pass a row to the create endpoint with a primary key that already exists you will get an error that looks like this: { "ok": false, "errors": [ "UNIQUE constraint failed: creatures.id" ] } You can avoid this error by passing the same "ignore": true or "replace": true options to the create endpoint as you can to the insert endpoint . To use the "replace": true option you will also need the update-row permission. Pass "alter": true to automatically add any missing columns to t… | ["JSON API", "The JSON write API"] | [] |
json_api:tabledropview | json_api | tabledropview | Dropping tables | To drop a table, make a POST to /<database>/<table>/-/drop . This requires the drop-table permission. POST /<database>/<table>/-/drop Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> Without a POST body this will return a status 200 with a note about how many rows will be deleted: { "ok": true, "database": "<database>", "table": "<table>", "row_count": 5, "message": "Pass \"confirm\": true to confirm" } If you pass the following POST body: { "confirm": true } Then the table will be dropped and a status 200 response of {"ok": true} will be returned. Any errors will return {"errors": ["... descriptive message ..."], "ok": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error. | ["JSON API", "The JSON write API"] | [] |
json_api:tableinsertview | json_api | tableinsertview | Inserting rows | This requires the insert-row permission. A single row can be inserted using the "row" key: POST /<database>/<table>/-/insert Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> { "row": { "column1": "value1", "column2": "value2" } } If successful, this will return a 201 status code and the newly inserted row, for example: { "rows": [ { "id": 1, "column1": "value1", "column2": "value2" } ] } To insert multiple rows at a time, use the same API method but send a list of dictionaries as the "rows" key: POST /<database>/<table>/-/insert Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> { "rows": [ { "column1": "value1", "column2": "value2" }, { "column1": "value3", "column2": "value4" } ] } If successful, this will return a 201 status code and a {"ok": true} response body. The maximum number of rows that can be submitted at once defaults to 100, but this can be changed using the max_insert_rows setting. To return the newly inserted rows, add the "return": true key to the request body: { "rows": [ { "column1": "value1", "column2": "value2" }, { "column1": "value3", "column2": "value4" } ], "return": true } This will return the same "rows" key as the single row example above. There is a small performance penalty for using this option. If any of your rows have a primary key that is already in use, you will get an error and none of the rows will be inserted: { "ok": false, "errors": [ "UNIQUE constraint failed: new_table… | ["JSON API", "The JSON write API"] | [] |
json_api:tableupsertview | json_api | tableupsertview | Upserting rows | An upsert is an insert or update operation. If a row with a matching primary key already exists it will be updated - otherwise a new row will be inserted. The upsert API is mostly the same shape as the insert API . It requires both the insert-row and update-row permissions. POST /<database>/<table>/-/upsert Content-Type: application/json Authorization: Bearer dstok_<rest-of-token> { "rows": [ { "id": 1, "title": "Updated title for 1", "description": "Updated description for 1" }, { "id": 2, "description": "Updated description for 2" }, { "id": 3, "title": "Item 3", "description": "Description for 3" } ] } Imagine a table with a primary key of id which already has rows with id values of 1 and 2 . The above example will: Update the row with id of 1 to set both title and description to the new values Update the row with id of 2 to set description to the new value - title will be left unchanged Insert a new row with id of 3 and both title and description set to the new values Similar to /-/insert , a row key with an object can be used instead of a rows array to upsert a single row. If successful, this will return a 200 status code and a {"ok": true} response body. Add "return": true to the request body to return full copies of the affected rows after they have been inserted or updated: { "rows": [ { "id": 1, "title": "Updated title for 1", "description": "Updated descri… | ["JSON API", "The JSON write API"] | [] |
metadata:database-level-metadata | metadata | database-level-metadata | Database-level metadata | "Database-level" metadata refers to fields that can be specified for each database in a Datasette instance. These attributes should be listed under a database inside the "databases" field. The following is the full list of allowed database-level metadata fields: source source_url license license_url about about_url | ["Metadata", "Metadata reference"] | [] |
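A sketch of how these fields fit into metadata.json, assuming a database file named mydatabase.db - the values shown are placeholders:

    {
      "databases": {
        "mydatabase": {
          "source": "Example source",
          "source_url": "https://example.com/",
          "license": "CC BY 4.0",
          "license_url": "https://creativecommons.org/licenses/by/4.0/",
          "about": "About this data",
          "about_url": "https://example.com/about"
        }
      }
    }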