id,page,ref,title,content,breadcrumbs,references authentication:authentication,authentication,authentication,Authentication and permissions,"Datasette doesn't require authentication by default. Any visitor to a Datasette instance can explore the full data and execute read-only SQL queries. Datasette's plugin system can be used to add many different styles of authentication, such as user accounts, single sign-on or API keys.",[],[] authentication:authentication-actor,authentication,authentication-actor,Actors,"Through plugins, Datasette can support both authenticated users (with cookies) and authenticated API agents (via authentication tokens). The word ""actor"" is used to cover both of these cases. Every request to Datasette has an associated actor value, available in the code as request.actor . This can be None for unauthenticated requests, or a JSON compatible Python dictionary for authenticated users or API agents. The actor dictionary can be any shape - the design of that data structure is left up to the plugins. A useful convention is to include an ""id"" string, as demonstrated by the ""root"" actor below. Plugins can use the actor_from_request(datasette, request) hook to implement custom logic for authenticating an actor based on the incoming HTTP request.","[""Authentication and permissions""]",[] authentication:authentication-actor-matches-allow,authentication,authentication-actor-matches-allow,actor_matches_allow(),"Plugins that wish to implement this same ""allow"" block permissions scheme can take advantage of the datasette.utils.actor_matches_allow(actor, allow) function: from datasette.utils import actor_matches_allow actor_matches_allow({""id"": ""root""}, {""id"": ""*""}) # returns True The currently authenticated actor is made available to plugins as request.actor .","[""Authentication and permissions""]",[] authentication:authentication-cli-create-token,authentication,authentication-cli-create-token,datasette create-token,"You can also create tokens on the command line using the datasette create-token command. This command takes one required argument - the ID of the actor to be associated with the created token. You can specify a -e/--expires-after option in seconds. If omitted, the token will never expire. The command will sign the token using the DATASETTE_SECRET environment variable, if available. You can also pass the secret using the --secret option. This means you can run the command locally to create tokens for use with a deployed Datasette instance, provided you know that instance's secret. To create a token for the root actor that will expire in one hour: datasette create-token root --expires-after 3600 To create a token that never expires using a specific secret: datasette create-token root --secret my-secret-goes-here","[""Authentication and permissions"", ""API Tokens""]",[] authentication:authentication-cli-create-token-restrict,authentication,authentication-cli-create-token-restrict,Restricting the actions that a token can perform,"Tokens created using datasette create-token ACTOR_ID will inherit all of the permissions of the actor that they are associated with. You can pass additional options to create tokens that are restricted to a subset of that actor's permissions. To restrict the token to just specific permissions against all available databases, use the --all option: datasette create-token root --all insert-row --all update-row This option can be passed as many times as you like. 
In the above example the token will only be allowed to insert and update rows. You can also restrict permissions such that they can only be used within specific databases: datasette create-token root --database mydatabase insert-row The resulting token will only be able to insert rows, and only to tables in the mydatabase database. Finally, you can restrict permissions to individual resources - tables, SQL views and named queries - within a specific database: datasette create-token root --resource mydatabase mytable insert-row These options have short versions: -a for --all , -d for --database and -r for --resource . You can add --debug to see a JSON representation of the token that has been created. Here's a full example: datasette create-token root \ --secret mysecret \ --all view-instance \ --all view-table \ --database docs view-query \ --resource docs documents insert-row \ --resource docs documents update-row \ --debug This example outputs the following: dstok_.eJxFizEKgDAMRe_y5w4qYrFXERGxDkVsMI0uxbubdjFL8l_ez1jhwEQCA6Fjjxp90qtkuHawzdjYrh8MFobLxZ_wBH0_gtnAF-hpS5VfmF8D_lnd97lHqUJgLd6sls4H1qwlhA.nH_7RecYHj5qSzvjhMU95iy0Xlc Decoded: { ""a"": ""root"", ""token"": ""dstok"", ""t"": 1670907246, ""_r"": { ""a"": [ ""vi"", ""vt"" ], ""d"": { ""docs"": [ ""vq"" ] }, ""r"": { ""docs"": { ""documents"": [ ""ir"", ""ur"" ] } } } }","[""Authentication and permissions"", ""API Tokens"", ""datasette create-token""]",[] authentication:authentication-ds-actor,authentication,authentication-ds-actor,The ds_actor cookie,"Datasette includes a default authentication plugin which looks for a signed ds_actor cookie containing a JSON actor dictionary. This is how the root actor mechanism works. Authentication plugins can set signed ds_actor cookies themselves like so: response = Response.redirect(""/"") response.set_cookie( ""ds_actor"", datasette.sign({""a"": {""id"": ""cleopaws""}}, ""actor""), ) Note that you need to pass ""actor"" as the namespace to .sign(value, namespace=""default"") . The shape of data encoded in the cookie is as follows: { ""a"": {... actor ...} }","[""Authentication and permissions""]",[] authentication:authentication-ds-actor-expiry,authentication,authentication-ds-actor-expiry,Including an expiry time,"ds_actor cookies can optionally include a signed expiry timestamp, after which the cookies will no longer be valid. Authentication plugins may chose to use this mechanism to limit the lifetime of the cookie. For example, if a plugin implements single-sign-on against another source it may decide to set short-lived cookies so that if the user is removed from the SSO system their existing Datasette cookies will stop working shortly afterwards. To include an expiry, add a ""e"" key to the cookie value containing a base62-encoded integer representing the timestamp when the cookie should expire. 
For example, here's how to set a cookie that expires after 24 hours: import time from datasette.utils import baseconv expires_at = int(time.time()) + (24 * 60 * 60) response = Response.redirect(""/"") response.set_cookie( ""ds_actor"", datasette.sign( { ""a"": {""id"": ""cleopaws""}, ""e"": baseconv.base62.encode(expires_at), }, ""actor"", ), ) The resulting cookie will encode data that looks something like this: { ""a"": { ""id"": ""cleopaws"" }, ""e"": ""1jjSji"" }","[""Authentication and permissions"", ""The ds_actor cookie""]",[] authentication:authentication-permissions-config,authentication,authentication-permissions-config,Access permissions in ,"There are two ways to configure permissions using datasette.yaml (or datasette.json ). For simple visibility permissions you can use ""allow"" blocks in the root, database, table and query sections. For other permissions you can use a ""permissions"" block, described in the next section . You can limit who is allowed to view different parts of your Datasette instance using ""allow"" keys in your Configuration . You can control the following: Access to the entire Datasette instance Access to specific databases Access to specific tables and views Access to specific Canned queries If a user cannot access a specific database, they will not be able to access tables, views or queries within that database. If a user cannot access the instance they will not be able to access any of the databases, tables, views or queries.","[""Authentication and permissions""]",[] authentication:authentication-permissions-database,authentication,authentication-permissions-database,Access to specific databases,"To limit access to a specific private.db database to just authenticated users, use the ""allow"" block like this: [[[cog config_example(cog, """""" databases: private: allow: id: ""*"" """""") ]]] [[[end]]]","[""Authentication and permissions"", ""Access permissions in ""]",[] authentication:authentication-permissions-explained,authentication,authentication-permissions-explained,How permissions are resolved,"The datasette.permission_allowed(actor, action, resource=None, default=...) method is called to check if an actor is allowed to perform a specific action. This method asks every plugin that implements the permission_allowed(datasette, actor, action, resource) hook if the actor is allowed to perform the action. Each plugin can return True to indicate that the actor is allowed to perform the action, False if they are not allowed and None if the plugin has no opinion on the matter. False acts as a veto - if any plugin returns False then the permission check is denied. Otherwise, if any plugin returns True then the permission check is allowed. The resource argument can be used to specify a specific resource that the action is being performed against. Some permissions, such as view-instance , do not involve a resource. Others such as view-database have a resource that is a string naming the database. Permissions that take both a database name and the name of a table, view or canned query within that database use a resource that is a tuple of two strings, (database_name, resource_name) . 
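To make those resource shapes concrete, here is a minimal sketch of a plugin that implements this hook; the policy it enforces, the assumed groups key on the actor and the chosen action string are illustrative assumptions rather than anything built into Datasette core:

from datasette import hookimpl


@hookimpl
def permission_allowed(actor, action, resource):
    # Hypothetical policy: actors in an assumed 'editors' group may
    # insert rows, but only into tables in the 'docs' database.
    if action != 'insert-row' or actor is None:
        return None  # no opinion - defer to other plugins and defaults
    if 'editors' not in actor.get('groups', []):
        return None
    # For insert-row the resource is a (database_name, table_name) tuple.
    database_name, table_name = resource
    if database_name == 'docs':
        return True  # allow
    return False  # acts as a veto for every other database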
Plugins that implement the permission_allowed() hook can decide if they are going to consider the provided resource or not.","[""Authentication and permissions"", ""Permissions""]",[] authentication:authentication-permissions-other,authentication,authentication-permissions-other,Other permissions in ,"For all other permissions, you can use one or more ""permissions"" blocks in your datasette.yaml configuration file. To grant access to the permissions debug tool to all signed-in users, you can grant permissions-debug to any actor with an id matching the wildcard * by adding this at the root of your configuration: [[[cog config_example(cog, """""" permissions: debug-menu: id: '*' """""") ]]] [[[end]]] To grant create-table to the user with id of editor for the docs database: [[[cog config_example(cog, """""" databases: docs: permissions: create-table: id: editor """""") ]]] [[[end]]] And for insert-row against the reports table in that docs database: [[[cog config_example(cog, """""" databases: docs: tables: reports: permissions: insert-row: id: editor """""") ]]] [[[end]]] The permissions debug tool can be useful for helping test permissions that you have configured in this way.","[""Authentication and permissions""]",[] authentication:authentication-permissions-query,authentication,authentication-permissions-query,Access to specific canned queries,"Canned queries allow you to configure named SQL queries in your datasette.yaml that can be executed by users. These queries can be set up to both read and write to the database, so controlling who can execute them can be important. To limit access to the add_name canned query in your dogs.db database to just the root user : [[[cog config_example(cog, """""" databases: dogs: queries: add_name: sql: INSERT INTO names (name) VALUES (:name) write: true allow: id: - root """""") ]]] [[[end]]]","[""Authentication and permissions"", ""Access permissions in ""]",[] authentication:createtokenview,authentication,createtokenview,API Tokens,"Datasette includes a default mechanism for generating API tokens that can be used to authenticate requests. Authenticated users can create new API tokens using a form on the /-/create-token page. Tokens created in this way can be further restricted to only allow access to specific actions, or to limit those actions to specific databases, tables or queries. Created tokens can then be passed in the Authorization: Bearer $token header of HTTP requests to Datasette. A token created by a user will include that user's ""id"" in the token payload, so any permissions granted to that user based on their ID can be made available to the token as well. When one of these tokens accompanies a request, the actor for that request will have the following shape: { ""id"": ""user_id"", ""token"": ""dstok"", ""token_expires"": 1667717426 } The ""id"" field duplicates the ID of the actor who first created the token. The ""token"" field identifies that this actor was authenticated using a Datasette signed token ( dstok ). The ""token_expires"" field, if present, indicates that the token will expire after that integer timestamp. The /-/create-token page cannot be accessed by actors that are authenticated with a ""token"": ""some-value"" property. This is to prevent API tokens from being used to create more tokens. Datasette plugins that implement their own form of API token authentication should follow this convention.
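As a rough illustration of the Authorization: Bearer pattern described above, this sketch sends an authenticated request using only the Python standard library; the instance URL, database, table and token value are placeholders to substitute with your own:

import urllib.request

# Placeholders - use your own Datasette instance and a token created on
# the /-/create-token page or with the datasette create-token command.
url = 'https://example.com/mydatabase/mytable.json'
token = 'dstok_...'  # a real signed API token goes here

request = urllib.request.Request(
    url, headers={'Authorization': 'Bearer ' + token}
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode('utf-8'))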
You can disable the signed token feature entirely using the allow_signed_tokens setting.","[""Authentication and permissions""]",[] authentication:id1,authentication,id1,Built-in permissions,"This section lists all of the permission checks that are carried out by Datasette core, along with the resource if it was passed.","[""Authentication and permissions""]",[] authentication:logoutview,authentication,logoutview,The /-/logout page,The page at /-/logout provides the ability to log out of a ds_actor cookie authentication session.,"[""Authentication and permissions"", ""The ds_actor cookie""]",[] authentication:permissions-alter-table,authentication,permissions-alter-table,alter-table,"Actor is allowed to alter a database table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissions-create-table,authentication,permissions-create-table,create-table,"Actor is allowed to create a database table. resource - string The name of the database Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissions-debug-menu,authentication,permissions-debug-menu,debug-menu,"Controls if the various debug pages are displayed in the navigation menu. Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissions-delete-row,authentication,permissions-delete-row,delete-row,"Actor is allowed to delete rows from a table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissions-drop-table,authentication,permissions-drop-table,drop-table,"Actor is allowed to drop a database table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissions-insert-row,authentication,permissions-insert-row,insert-row,"Actor is allowed to insert rows into a table. resource - tuple: (string, string) The name of the database, then the name of the table Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissions-permissions-debug,authentication,permissions-permissions-debug,permissions-debug,"Actor is allowed to view the /-/permissions debug page. Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissions-plugins,authentication,permissions-plugins,Checking permissions in plugins,"Datasette plugins can check if an actor has permission to perform an action using the datasette.permission_allowed(...) method. Datasette core performs a number of permission checks, documented below . Plugins can implement the permission_allowed(datasette, actor, action, resource) plugin hook to participate in decisions about whether an actor should be able to perform a specified action.","[""Authentication and permissions""]",[] authentication:permissions-update-row,authentication,permissions-update-row,update-row,"Actor is allowed to update rows in a table. 
resource - tuple: (string, string) The name of the database, then the name of the table Default deny .","[""Authentication and permissions"", ""Built-in permissions""]",[] authentication:permissionsdebugview,authentication,permissionsdebugview,The permissions debug tool,"The debug tool at /-/permissions is only available to the authenticated root user (or any actor granted the permissions-debug action). It shows the thirty most recent permission checks that have been carried out by the Datasette instance. It also provides an interface for running hypothetical permission checks against a hypothetical actor. This is a useful way of confirming that your configured permissions work in the way you expect. This is designed to help administrators and plugin authors understand exactly how permission checks are being carried out, in order to effectively configure Datasette's permission system.","[""Authentication and permissions""]",[] changelog:cookie-methods,changelog,cookie-methods,Cookie methods,"Plugins can now use the new response.set_cookie() method to set cookies. A new request.cookies method on the Request object can be used to read incoming cookies.","[""Changelog"", ""0.44 (2020-06-11)""]",[] changelog:foreign-key-expansions,changelog,foreign-key-expansions,Foreign key expansions,"When Datasette detects a foreign key reference it attempts to resolve a label for that reference (automatically or using the Specifying the label column for a table metadata option) so it can display a link to the associated row. This expansion is now also available for JSON and CSV representations of the table, using the new _labels=on query string option. See Expanding foreign key references for more details.","[""Changelog"", ""0.23 (2018-06-18)""]",[] changelog:id1,changelog,id1,Changelog,,[],[] changelog:id154,changelog,id154,0.17 (2018-04-13),Release 0.17 to fix issues with PyPI,"[""Changelog""]",[] changelog:id208,changelog,id208,0.11 (2017-11-14),"Added datasette publish now --force option. This calls now with --force - useful as it means you get a fresh copy of datasette even if Now has already cached that docker layer. Enable --cors by default when running in a container.","[""Changelog""]",[] changelog:id21,changelog,id21,0.60 (2022-01-13),,"[""Changelog""]",[] changelog:id211,changelog,id211,0.9 (2017-11-13),"Added --sql_time_limit_ms and --extra-options . The serve command now accepts --sql_time_limit_ms for customizing the SQL time limit. The publish and package commands now accept --extra-options which can be used to specify additional options to be passed to the datasette serve command when it executes inside the resulting Docker containers.","[""Changelog""]",[] changelog:id44,changelog,id44,0.51.1 (2020-10-31),Improvements to the new Binary data documentation page.,"[""Changelog""]",[] changelog:id45,changelog,id45,0.51 (2020-10-31),"A new visual design, plugin hooks for adding navigation options, better handling of binary data, URL building utility methods and better support for running Datasette behind a proxy.","[""Changelog""]",[] changelog:id84,changelog,id84,0.29 (2019-07-07),"ASGI, new plugin hooks, facet by date and much, much more...","[""Changelog""]",[] changelog:id87,changelog,id87,0.27.1 (2019-05-09),"Tiny bugfix release: don't install tests/ in the wrong place.
Thanks, Veit Heller.","[""Changelog""]",[] changelog:id91,changelog,id91,0.25.2 (2018-12-16),"datasette publish heroku now uses the python-3.6.7 runtime Added documentation on how to build the documentation Added documentation covering our release process Upgraded to pytest 4.0.2","[""Changelog""]",[] changelog:signed-values-and-secrets,changelog,signed-values-and-secrets,Signed values and secrets,"Both flash messages and user authentication needed a way to sign values and set signed cookies. Two new methods are now available for plugins to take advantage of this mechanism: .sign(value, namespace=""default"") and .unsign(value, namespace=""default"") . Datasette will generate a secret automatically when it starts up, but to avoid resetting the secret (and hence invalidating any cookies) every time the server restarts you should set your own secret. You can pass a secret to Datasette using the new --secret option or with a DATASETTE_SECRET environment variable. See Configuring the secret for more details. You can also set a secret when you deploy Datasette using datasette publish or datasette package - see Using secrets with datasette publish . Plugins can now sign values and verify their signatures using the datasette.sign() and datasette.unsign() methods.","[""Changelog"", ""0.44 (2020-06-11)""]",[] changelog:v0-29-medium-changes,changelog,v0-29-medium-changes,Easier custom templates for table rows,"If you want to customize the display of individual table rows, you can do so using a _table.html template include that looks something like this: {% for row in display_rows %}

{{ row[""title""] }}
{{ row[""description""] }}
Category: {{ row.display(""category_id"") }}
{% endfor %} This is a backwards incompatible change . If you previously had a custom template called _rows_and_columns.html you need to rename it to _table.html . See Custom templates for full details.","[""Changelog"", ""0.29 (2019-07-07)""]",[] changelog:v1-0-a9,changelog,v1-0-a9,1.0a9 (2024-02-16),This alpha release adds basic alter table support to the Datasette Write API and fixes a permissions bug relating to the /upsert API endpoint.,"[""Changelog""]",[] cli-reference:cli-datasette-get,cli-reference,cli-datasette-get,datasette --get,"The --get option to datasette serve (or just datasette ) specifies the path to a page within Datasette and causes Datasette to output the content from that path without starting the web server. This means that all of Datasette's functionality can be accessed directly from the command-line. For example: datasette --get '/-/versions.json' | jq . { ""python"": { ""version"": ""3.8.5"", ""full"": ""3.8.5 (default, Jul 21 2020, 10:48:26) \n[Clang 11.0.3 (clang-1103.0.32.62)]"" }, ""datasette"": { ""version"": ""0.46+15.g222a84a.dirty"" }, ""asgi"": ""3.0"", ""uvicorn"": ""0.11.8"", ""sqlite"": { ""version"": ""3.32.3"", ""fts_versions"": [ ""FTS5"", ""FTS4"", ""FTS3"" ], ""extensions"": { ""json1"": null }, ""compile_options"": [ ""COMPILER=clang-11.0.3"", ""ENABLE_COLUMN_METADATA"", ""ENABLE_FTS3"", ""ENABLE_FTS3_PARENTHESIS"", ""ENABLE_FTS4"", ""ENABLE_FTS5"", ""ENABLE_GEOPOLY"", ""ENABLE_JSON1"", ""ENABLE_PREUPDATE_HOOK"", ""ENABLE_RTREE"", ""ENABLE_SESSION"", ""MAX_VARIABLE_NUMBER=250000"", ""THREADSAFE=1"" ] } } You can use the --token TOKEN option to send an API token with the simulated request. Or you can make a request as a specific actor by passing a JSON representation of that actor to --actor : datasette --memory --actor '{""id"": ""root""}' --get '/-/actor.json' The exit code of datasette --get will be 0 if the request succeeds and 1 if the request produced an HTTP status code other than 200 - e.g. a 404 or 500 error. This lets you use datasette --get / to run tests against a Datasette application in a continuous integration environment such as GitHub Actions.","[""CLI reference"", ""datasette serve""]",[] cli-reference:cli-help-create-token-help,cli-reference,cli-help-create-token-help,datasette create-token,"Create a signed API token, see datasette create-token . [[[cog help([""create-token"", ""--help""]) ]]] Usage: datasette create-token [OPTIONS] ID Create a signed API token for the specified actor ID Example: datasette create-token root --secret mysecret To allow only ""view-database-download"" for all databases: datasette create-token root --secret mysecret \ --all view-database-download To allow ""create-table"" against a specific database: datasette create-token root --secret mysecret \ --database mydb create-table To allow ""insert-row"" against a specific table: datasette create-token root --secret myscret \ --resource mydb mytable insert-row Restricted actions can be specified multiple times using multiple --all, --database, and --resource options. Add --debug to see a decoded version of the token. 
Options: --secret TEXT Secret used for signing the API tokens [required] -e, --expires-after INTEGER Token should expire after this many seconds -a, --all ACTION Restrict token to this action -d, --database DB ACTION Restrict token to this action on this database -r, --resource DB RESOURCE ACTION Restrict token to this action on this database resource (a table, SQL view or named query) --debug Show decoded token --plugins-dir DIRECTORY Path to directory containing custom plugins --help Show this message and exit. [[[end]]]","[""CLI reference""]",[] cli-reference:cli-help-help,cli-reference,cli-help-help,datasette --help,"Running datasette --help shows a list of all of the available commands. [[[cog help([""--help""]) ]]] Usage: datasette [OPTIONS] COMMAND [ARGS]... Datasette is an open source multi-tool for exploring and publishing data About Datasette: https://datasette.io/ Full documentation: https://docs.datasette.io/ Options: --version Show the version and exit. --help Show this message and exit. Commands: serve* Serve up specified SQLite database files with a web UI create-token Create a signed API token for the specified actor ID inspect Generate JSON summary of provided database files install Install plugins and packages from PyPI into the same... package Package SQLite files into a Datasette Docker container plugins List currently installed plugins publish Publish specified SQLite database files to the internet... uninstall Uninstall plugins and Python packages from the Datasette... [[[end]]] Additional commands added by plugins that use the register_commands(cli) hook will be listed here as well.","[""CLI reference""]",[] cli-reference:cli-help-inspect-help,cli-reference,cli-help-inspect-help,datasette inspect,"Outputs JSON representing introspected data about one or more SQLite database files. If you are opening an immutable database, you can pass this file to the --inspect-data option to improve Datasette's performance by allowing it to skip running row counts against the database when it first starts running: datasette inspect mydatabase.db > inspect-data.json datasette serve -i mydatabase.db --inspect-file inspect-data.json This performance optimization is used automatically by some of the datasette publish commands. You are unlikely to need to apply this optimization manually. [[[cog help([""inspect"", ""--help""]) ]]] Usage: datasette inspect [OPTIONS] [FILES]... Generate JSON summary of provided database files This can then be passed to ""datasette --inspect-file"" to speed up count operations against immutable database files. Options: --inspect-file TEXT --load-extension PATH:ENTRYPOINT? Path to a SQLite extension to load, and optional entrypoint --help Show this message and exit. [[[end]]]","[""CLI reference""]",[] cli-reference:cli-help-package-help,cli-reference,cli-help-package-help,datasette package,"Package SQLite files into a Datasette Docker container, see datasette package . [[[cog help([""package"", ""--help""]) ]]] Usage: datasette package [OPTIONS] FILES... Package SQLite files into a Datasette Docker container Options: -t, --tag TEXT Name for the resulting Docker container, can optionally use name:tag format -m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish --extra-options TEXT Extra options to pass to datasette serve --branch TEXT Install datasette from a GitHub branch e.g. 
main --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --install TEXT Additional packages (e.g. plugins) to install --spatialite Enable SpatialLite extension --version-note TEXT Additional note to show on /-/versions --secret TEXT Secret used for signing secure values, such as signed cookies -p, --port INTEGER RANGE Port to run the server on, defaults to 8001 [1<=x<=65535] --title TEXT Title for metadata --license TEXT License label for metadata --license_url TEXT License URL for metadata --source TEXT Source label for metadata --source_url TEXT Source URL for metadata --about TEXT About label for metadata --about_url TEXT About URL for metadata --help Show this message and exit. [[[end]]]","[""CLI reference""]",[] cli-reference:cli-help-plugins-help,cli-reference,cli-help-plugins-help,datasette plugins,"Output JSON showing all currently installed plugins, their versions, whether they include static files or templates and which Plugin hooks they use. [[[cog help([""plugins"", ""--help""]) ]]] Usage: datasette plugins [OPTIONS] List currently installed plugins Options: --all Include built-in default plugins --requirements Output requirements.txt of installed plugins --plugins-dir DIRECTORY Path to directory containing custom plugins --help Show this message and exit. [[[end]]] Example output: [ { ""name"": ""datasette-geojson"", ""static"": false, ""templates"": false, ""version"": ""0.3.1"", ""hooks"": [ ""register_output_renderer"" ] }, { ""name"": ""datasette-geojson-map"", ""static"": true, ""templates"": false, ""version"": ""0.4.0"", ""hooks"": [ ""extra_body_script"", ""extra_css_urls"", ""extra_js_urls"" ] }, { ""name"": ""datasette-leaflet"", ""static"": true, ""templates"": false, ""version"": ""0.2.2"", ""hooks"": [ ""extra_body_script"", ""extra_template_vars"" ] } ]","[""CLI reference""]",[] cli-reference:cli-help-publish-cloudrun-help,cli-reference,cli-help-publish-cloudrun-help,datasette publish cloudrun,"See Publishing to Google Cloud Run . [[[cog help([""publish"", ""cloudrun"", ""--help""]) ]]] Usage: datasette publish cloudrun [OPTIONS] [FILES]... Publish databases to Datasette running on Cloud Run Options: -m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish --extra-options TEXT Extra options to pass to datasette serve --branch TEXT Install datasette from a GitHub branch e.g. main --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --install TEXT Additional packages (e.g. plugins) to install --plugin-secret ... Secrets to pass to plugins, e.g. 
--plugin- secret datasette-auth-github client_id xxx --version-note TEXT Additional note to show on /-/versions --secret TEXT Secret used for signing secure values, such as signed cookies --title TEXT Title for metadata --license TEXT License label for metadata --license_url TEXT License URL for metadata --source TEXT Source label for metadata --source_url TEXT Source URL for metadata --about TEXT About label for metadata --about_url TEXT About URL for metadata -n, --name TEXT Application name to use when building --service TEXT Cloud Run service to deploy (or over-write) --spatialite Enable SpatialLite extension --show-files Output the generated Dockerfile and metadata.json --memory TEXT Memory to allocate in Cloud Run, e.g. 1Gi --cpu [1|2|4] Number of vCPUs to allocate in Cloud Run --timeout INTEGER Build timeout in seconds --apt-get-install TEXT Additional packages to apt-get install --max-instances INTEGER Maximum Cloud Run instances --min-instances INTEGER Minimum Cloud Run instances --help Show this message and exit. [[[end]]]","[""CLI reference""]",[] cli-reference:cli-help-publish-help,cli-reference,cli-help-publish-help,datasette publish,"Shows a list of available deployment targets for publishing data with Datasette. Additional deployment targets can be added by plugins that use the publish_subcommand(publish) hook. [[[cog help([""publish"", ""--help""]) ]]] Usage: datasette publish [OPTIONS] COMMAND [ARGS]... Publish specified SQLite database files to the internet along with a Datasette-powered interface and API Options: --help Show this message and exit. Commands: cloudrun Publish databases to Datasette running on Cloud Run heroku Publish databases to Datasette running on Heroku [[[end]]]","[""CLI reference""]",[] cli-reference:cli-help-publish-heroku-help,cli-reference,cli-help-publish-heroku-help,datasette publish heroku,"See Publishing to Heroku . [[[cog help([""publish"", ""heroku"", ""--help""]) ]]] Usage: datasette publish heroku [OPTIONS] [FILES]... Publish databases to Datasette running on Heroku Options: -m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish --extra-options TEXT Extra options to pass to datasette serve --branch TEXT Install datasette from a GitHub branch e.g. main --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --install TEXT Additional packages (e.g. plugins) to install --plugin-secret ... Secrets to pass to plugins, e.g. --plugin- secret datasette-auth-github client_id xxx --version-note TEXT Additional note to show on /-/versions --secret TEXT Secret used for signing secure values, such as signed cookies --title TEXT Title for metadata --license TEXT License label for metadata --license_url TEXT License URL for metadata --source TEXT Source label for metadata --source_url TEXT Source URL for metadata --about TEXT About label for metadata --about_url TEXT About URL for metadata -n, --name TEXT Application name to use when deploying --tar TEXT --tar option to pass to Heroku, e.g. --tar=/usr/local/bin/gtar --generate-dir DIRECTORY Output generated application files and stop without deploying --help Show this message and exit. 
[[[end]]]","[""CLI reference""]",[] cli-reference:cli-help-serve-help,cli-reference,cli-help-serve-help,datasette serve,"This command starts the Datasette web application running on your machine: datasette serve mydatabase.db Or since this is the default command you can run this instead: datasette mydatabase.db Once started you can access it at http://localhost:8001 [[[cog help([""serve"", ""--help""]) ]]] Usage: datasette serve [OPTIONS] [FILES]... Serve up specified SQLite database files with a web UI Options: -i, --immutable PATH Database files to open in immutable mode -h, --host TEXT Host for server. Defaults to 127.0.0.1 which means only connections from the local machine will be allowed. Use 0.0.0.0 to listen to all IPs and allow access from other machines. -p, --port INTEGER RANGE Port for server, defaults to 8001. Use -p 0 to automatically assign an available port. [0<=x<=65535] --uds TEXT Bind to a Unix domain socket --reload Automatically reload if code or metadata change detected - useful for development --cors Enable CORS by serving Access-Control-Allow- Origin: * --load-extension PATH:ENTRYPOINT? Path to a SQLite extension to load, and optional entrypoint --inspect-file TEXT Path to JSON file created using ""datasette inspect"" -m, --metadata FILENAME Path to JSON/YAML file containing license/source metadata --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --memory Make /_memory database available -c, --config FILENAME Path to JSON/YAML Datasette configuration file -s, --setting SETTING... nested.key, value setting to use in Datasette configuration --secret TEXT Secret used for signing secure values, such as signed cookies --root Output URL that sets a cookie authenticating the root user --get TEXT Run an HTTP GET request against this path, print results and exit --token TEXT API token to send with --get requests --actor TEXT Actor to use for --get requests (JSON string) --version-note TEXT Additional note to show on /-/versions --help-settings Show available settings --pdb Launch debugger on any errors -o, --open Open Datasette in your web browser --create Create database files if they do not exist --crossdb Enable cross-database joins using the /_memory database --nolock Ignore locking, open locked files in read-only mode --ssl-keyfile TEXT SSL key file --ssl-certfile TEXT SSL certificate file --internal PATH Path to a persistent Datasette internal SQLite database --help Show this message and exit. [[[end]]]","[""CLI reference""]",[] cli-reference:cli-help-serve-help-settings,cli-reference,cli-help-serve-help-settings,datasette serve --help-settings,"This command outputs all of the available Datasette settings . These can be passed to datasette serve using datasette serve --setting name value . 
[[[cog help([""--help-settings""]) ]]] Settings: default_page_size Default page size for the table view (default=100) max_returned_rows Maximum rows that can be returned from a table or custom query (default=1000) max_insert_rows Maximum rows that can be inserted at a time using the bulk insert API (default=100) num_sql_threads Number of threads in the thread pool for executing SQLite queries (default=3) sql_time_limit_ms Time limit for a SQL query in milliseconds (default=1000) default_facet_size Number of values to return for requested facets (default=30) facet_time_limit_ms Time limit for calculating a requested facet (default=200) facet_suggest_time_limit_ms Time limit for calculating a suggested facet (default=50) allow_facet Allow users to specify columns to facet using ?_facet= parameter (default=True) allow_download Allow users to download the original SQLite database files (default=True) allow_signed_tokens Allow users to create and use signed API tokens (default=True) default_allow_sql Allow anyone to run arbitrary SQL queries (default=True) max_signed_tokens_ttl Maximum allowed expiry time for signed API tokens (default=0) suggest_facets Calculate and display suggested facets (default=True) default_cache_ttl Default HTTP cache TTL (used in Cache-Control: max-age= header) (default=5) cache_size_kb SQLite cache size in KB (0 == use SQLite default) (default=0) allow_csv_stream Allow .csv?_stream=1 to download all rows (ignoring max_returned_rows) (default=True) max_csv_mb Maximum size allowed for CSV export in MB - set 0 to disable this limit (default=100) truncate_cells_html Truncate cells longer than this in HTML table view - set 0 to disable (default=2048) force_https_urls Force URLs in API output to always use https:// protocol (default=False) template_debug Allow display of template debug information with ?_context=1 (default=False) trace_debug Allow display of SQL trace debug information with ?_trace=1 (default=False) base_url Datasette URLs should use this base path (default=/) [[[end]]]","[""CLI reference"", ""datasette serve""]",[] cli-reference:cli-help-uninstall-help,cli-reference,cli-help-uninstall-help,datasette uninstall,"Uninstall one or more plugins. [[[cog help([""uninstall"", ""--help""]) ]]] Usage: datasette uninstall [OPTIONS] PACKAGES... Uninstall plugins and Python packages from the Datasette environment Options: -y, --yes Don't ask for confirmation --help Show this message and exit. [[[end]]]","[""CLI reference""]",[] cli-reference:id1,cli-reference,id1,CLI reference,"The datasette CLI tool provides a number of commands. Running datasette without specifying a command runs the default command, datasette serve . See datasette serve for the full list of options for that command. [[[cog from datasette import cli from click.testing import CliRunner import textwrap def help(args): title = ""datasette "" + "" "".join(args) cog.out(""\n::\n\n"") result = CliRunner().invoke(cli.cli, args) output = result.output.replace(""Usage: cli "", ""Usage: datasette "") cog.out(textwrap.indent(output, ' ')) cog.out(""\n\n"") ]]] [[[end]]]",[],[] configuration:configuration-reference,configuration,configuration-reference,,"The following example shows some of the valid configuration options that can exist inside datasette.yaml . 
[[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """""" # Datasette settings block settings: default_page_size: 50 sql_time_limit_ms: 3500 max_returned_rows: 2000 # top-level plugin configuration plugins: datasette-my-plugin: key: valueA # Database and table-level configuration databases: your_db_name: # plugin configuration for the your_db_name database plugins: datasette-my-plugin: key: valueA tables: your_table_name: allow: # Only the root user can access this table id: root # plugin configuration for the your_table_name table # inside your_db_name database plugins: datasette-my-plugin: key: valueB """""") ) ]]] [[[end]]]","[""Configuration""]",[] configuration:configuration-reference-canned-queries,configuration,configuration-reference-canned-queries,Canned queries configuration,"Canned queries are named SQL queries that appear in the Datasette interface. They can be configured in datasette.yaml using the queries key at the database level: [[[cog from metadata_doc import config_example, config_example config_example(cog, { ""databases"": { ""sf-trees"": { ""queries"": { ""just_species"": { ""sql"": ""select qSpecies from Street_Tree_List"" } } } } }) ]]] [[[end]]] See the canned queries documentation for more, including how to configure writable canned queries .","[""Configuration"", null]",[] configuration:configuration-reference-permissions,configuration,configuration-reference-permissions,Permissions configuration,"Datasette's authentication and permissions system can also be configured using datasette.yaml . Here is a simple example: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """""" # Instance is only available to users 'sharon' and 'percy': allow: id: - sharon - percy # Only 'percy' is allowed access to the accounting database: databases: accounting: allow: id: percy """""").strip() ) ]]] [[[end]]] Access permissions in datasette.yaml has the full details.","[""Configuration"", null]",[] configuration:configuration-reference-plugins,configuration,configuration-reference-plugins,Plugin configuration,"Datasette plugins often require configuration. This plugin configuration should be placed in plugins keys inside datasette.yaml . Most plugins are configured at the top-level of the file, using the plugins key: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """""" # inside datasette.yaml plugins: datasette-my-plugin: key: my_value """""").strip() ) ]]] [[[end]]] Some plugins can be configured at the database or table level. 
These should use a plugins key nested under the appropriate place within the databases object: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """""" # inside datasette.yaml databases: my_database: # plugin configuration for the my_database database plugins: datasette-my-plugin: key: my_value my_other_database: tables: my_table: # plugin configuration for the my_table table inside the my_other_database database plugins: datasette-my-plugin: key: my_value """""").strip() ) ]]] [[[end]]]","[""Configuration"", null]",[] configuration:configuration-reference-settings,configuration,configuration-reference-settings,Settings,"Settings can be configured in datasette.yaml with the settings key: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """""" # inside datasette.yaml settings: default_allow_sql: off default_page_size: 50 """""").strip() ) ]]] [[[end]]] The full list of settings is available in the settings documentation . Settings can also be passed to Datasette using one or more --setting name value command line options.","[""Configuration"", null]",[] configuration:id1,configuration,id1,Configuration,"Datasette offers several ways to configure your Datasette instances: server settings, plugin configuration, authentication, and more. Most configuration can be handled using a datasette.yaml configuration file, passed to datasette using the -c/--config flag: datasette mydatabase.db --config datasette.yaml This file can also use JSON, as datasette.json . YAML is recommended over JSON due to its support for comments and multi-line strings.",[],[] contributing:contributing-debugging,contributing,contributing-debugging,Debugging,"Any errors that occur while Datasette is running will display a stack trace on the console. You can tell Datasette to open an interactive pdb debugger session if an error occurs using the --pdb option: datasette --pdb fixtures.db","[""Contributing""]",[] contributing:contributing-formatting-black,contributing,contributing-formatting-black,Running Black,"Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: black . --check All done! ✨ 🍰 ✨ 95 files would be left unchanged. If any of your code does not conform to Black you can run this to automatically fix those problems: black . reformatted ../datasette/setup.py All done! ✨ 🍰 ✨ 1 file reformatted, 94 files left unchanged.","[""Contributing"", ""Code formatting""]",[] contributing:contributing-using-fixtures,contributing,contributing-using-fixtures,Using fixtures,"To run Datasette itself, type datasette . You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. You can create a copy of that database by running this command: python tests/fixtures.py fixtures.db Now you can run Datasette against the new fixtures database like so: datasette fixtures.db This will start a server at http://127.0.0.1:8001/ . Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript).
If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: datasette --reload fixtures.db You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that: python tests/fixtures.py fixtures.db fixtures-metadata.json Or to output the plugins used by the tests, run this: python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins Test tables written to fixtures.db - metadata written to fixtures-metadata.json Wrote plugin: fixtures-plugins/register_output_renderer.py Wrote plugin: fixtures-plugins/view_name.py Wrote plugin: fixtures-plugins/my_plugin.py Wrote plugin: fixtures-plugins/messages_output_renderer.py Wrote plugin: fixtures-plugins/my_plugin_2.py Then run Datasette like this: datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/","[""Contributing""]",[] contributing:general-guidelines,contributing,general-guidelines,General guidelines,"main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them.","[""Contributing""]",[] contributing:id1,contributing,id1,Contributing,"Datasette is an open source project. We welcome contributions! This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins .",[],[] csv_export:csv-export-url-parameters,csv_export,csv-export-url-parameters,URL parameters,"The following options can be used to customize the CSVs returned by Datasette. ?_header=off This removes the first row of the CSV file specifying the headings - only the row data will be returned. ?_stream=on Stream all matching records, not just the first page of results. See below. ?_dl=on Causes Datasette to return a content-disposition: attachment; filename=""filename.csv"" header.","[""CSV export""]",[] csv_export:streaming-all-records,csv_export,streaming-all-records,Streaming all records,"The stream all rows option is designed to be as efficient as possible - under the hood it takes advantage of Python 3 asyncio capabilities and Datasette's efficient pagination to stream back the full CSV file. Since databases can get pretty large, by default this option is capped at 100MB - if a table returns more than 100MB of data the last line of the CSV will be a truncation error message. You can increase or remove this limit using the max_csv_mb config setting. You can also disable the CSV export feature entirely using allow_csv_stream .","[""CSV export""]",[] custom_templates:css-classes-on-the-body,custom_templates,css-classes-on-the-body,CSS classes on the ,"Every default template includes CSS classes in the body designed to support custom styling. The index template (the top level page at / ) gets this: The database template ( /dbname ) gets this: The custom SQL template ( /dbname?sql=... ) gets this: A canned query template ( /dbname/queryname ) gets this: The table template ( /dbname/tablename ) gets: The row template ( /dbname/tablename/rowid ) gets: The db-x and table-x classes use the database or table names themselves if they are valid CSS identifiers. 
If they aren't, we strip any invalid characters out and append a 6-character md5 digest of the original name, in order to ensure that multiple tables which resolve to the same stripped character version still have different CSS classes. Some examples: ""simple"" => ""simple"" ""MixedCase"" => ""MixedCase"" ""-no-leading-hyphens"" => ""no-leading-hyphens-65bea6"" ""_no-leading-underscores"" => ""no-leading-underscores-b921bc"" ""no spaces"" => ""no-spaces-7088d7"" ""-"" => ""336d5e"" ""no $ characters"" => ""no--characters-59e024"" Table cell ( td ) and header ( th ) elements also get custom CSS classes reflecting the database column they are representing, for example:
<th class=""col-id"">id</th> <th class=""col-name"">name</th>
<td class=""col-id"">1</td> <td class=""col-name"">SMITH</td>
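The mapping shown in these examples can be reproduced with a short helper; this is a sketch of the behaviour described above (strip invalid characters, append the first six hex characters of an md5 digest of the original name), and the helper name and regular expression here are assumptions rather than Datasette's actual implementation:

import hashlib
import re

VALID_CSS_CLASS = re.compile(r'^[a-zA-Z]+[_a-zA-Z0-9-]*$')


def to_css_class(name):
    # Names that are already valid CSS identifiers pass through unchanged.
    if VALID_CSS_CLASS.match(name):
        return name
    digest = hashlib.md5(name.encode('utf8')).hexdigest()[:6]
    # Strip leading hyphens and underscores, turn whitespace into hyphens,
    # then drop any remaining invalid characters.
    cleaned = '-'.join(name.lstrip('_-').split())
    cleaned = re.sub(r'[^a-zA-Z0-9_-]', '', cleaned)
    # Append the 6 character digest so that stripped names cannot collide.
    return '-'.join(part for part in (cleaned, digest) if part)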
","[""Custom pages and templates""]",[] custom_templates:custom-pages-404,custom_templates,custom-pages-404,Returning 404s,"To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function: {% if not rows %} {{ raise_404(""Content not found"") }} {% endif %} If you call raise_404() the other content in your template will be ignored.","[""Custom pages and templates""]",[] custom_templates:custom-pages-headers,custom_templates,custom-pages-headers,Custom headers and status codes,"Custom pages default to being served with a content-type of text/html; charset=utf-8 and a 200 status code. You can change these by calling a custom function from within your template. For example, to serve a custom page with a 418 I'm a teapot HTTP status code, create a file in pages/teapot.html containing the following: {{ custom_status(418) }} Teapot I'm a teapot To serve a custom HTTP header, add a custom_header(name, value) function call. For example: {{ custom_status(418) }} {{ custom_header(""x-teapot"", ""I am"") }} Teapot I'm a teapot You can verify this is working using curl like this: curl -I 'http://127.0.0.1:8001/teapot' HTTP/1.1 418 date: Sun, 26 Apr 2020 18:38:30 GMT server: uvicorn x-teapot: I am content-type: text/html; charset=utf-8","[""Custom pages and templates""]",[] custom_templates:custom-pages-redirects,custom_templates,custom-pages-redirects,Custom redirects,"You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html : {{ custom_redirect(""https://github.com/simonw/datasette"") }} Now requests to http://localhost:8001/datasette will result in a redirect. These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function: {{ custom_redirect(""https://github.com/simonw/datasette"", 301) }}","[""Custom pages and templates""]",[] custom_templates:customization,custom_templates,customization,Custom pages and templates,Datasette provides a number of ways of customizing the way data is displayed.,[],[] custom_templates:customization-static-files,custom_templates,customization-static-files,Serving static files,"Datasette can serve static files for you, using the --static option. Consider the following directory structure: metadata.json static-files/styles.css static-files/app.js You can start Datasette using --static assets:static-files/ to serve those files from the /assets/ mount point: datasette --config datasette.yaml --static assets:static-files/ --memory The following URLs will now serve the content from those CSS and JS files: http://localhost:8001/assets/styles.css http://localhost:8001/assets/app.js You can reference those files from datasette.yaml like this, see custom CSS and JavaScript for more details: [[[cog from metadata_doc import config_example config_example(cog, """""" extra_css_urls: - /assets/styles.css extra_js_urls: - /assets/app.js """""") ]]] [[[end]]]","[""Custom pages and templates""]",[] custom_templates:id1,custom_templates,id1,Custom pages,"You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory. For example, to add a custom page that is served at http://localhost/about you would create a file in templates/pages/about.html , then start Datasette like this: datasette mydb.db --template-dir=templates/ You can nest directories within pages to create a nested structure. 
To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html .","[""Custom pages and templates"", ""Publishing static assets""]",[] custom_templates:publishing-static-assets,custom_templates,publishing-static-assets,Publishing static assets,"The datasette publish command can be used to publish your static assets, using the same syntax as above: datasette publish cloudrun mydb.db --static assets:static-files/ This will upload the contents of the static-files/ directory as part of the deployment, and configure Datasette to correctly serve the assets from /assets/ .","[""Custom pages and templates""]",[] deploying:deploying,deploying,deploying,Deploying Datasette,"The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. You can deploy Datasette to other hosting providers using the instructions on this page.",[],[] deploying:deploying-fundamentals,deploying,deploying-fundamentals,Deployment fundamentals,"Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000. If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette.","[""Deploying Datasette""]",[] deploying:deploying-proxy,deploying,deploying-proxy,Running Datasette behind a proxy,"You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: datasette my-database.db --setting base_url /my-datasette/ -p 8009 This will run Datasette with the following URLs: http://127.0.0.1:8009/my-datasette/ - the Datasette homepage http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance.","[""Deploying Datasette""]",[] deploying:deploying-systemd,deploying,deploying-systemd,Running Datasette using systemd,"You can run Datasette on Ubuntu or Debian systems using systemd . First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . You can copy a test database into that folder like so: cd /home/ubuntu/datasette-root curl -O https://latest.datasette.io/fixtures.db Create a file at /etc/systemd/system/datasette.service with the following contents: [Unit] Description=Datasette After=network.target [Service] Type=simple User=ubuntu Environment=DATASETTE_SECRET= WorkingDirectory=/home/ubuntu/datasette-root ExecStart=datasette serve . 
-h 127.0.0.1 -p 8000 Restart=on-failure [Install] WantedBy=multi-user.target Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: python3 -c 'import secrets; print(secrets.token_hex(32))' This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. You can start the Datasette process running using the following: sudo systemctl daemon-reload sudo systemctl start datasette.service You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using: sudo systemctl restart datasette.service Once the service has started you can confirm that Datasette is running on port 8000 like so: curl 127.0.0.1:8000/-/versions.json # Should output JSON showing the installed version Datasette will not be accessible from outside the server because it is listening on 127.0.0.1 . You can expose it by instead listening on 0.0.0.0 , but a better way is to set up a proxy such as nginx - see Running Datasette behind a proxy .","[""Deploying Datasette""]",[] events:id1,events,id1,Events,"Datasette includes a mechanism for tracking events that occur while the software is running. This is primarily intended to be used by plugins, which can both trigger events and listen for events. The core Datasette application triggers events when certain things happen. This page describes those events. Plugins can listen for events using the track_event(datasette, event) plugin hook, which will be called with instances of the following classes - or additional classes registered by other plugins . class datasette.events. LoginEvent actor : dict | None Event name: login A user (represented by event.actor ) has logged in. class datasette.events. LogoutEvent actor : dict | None Event name: logout A user (represented by event.actor ) has logged out. class datasette.events. CreateTokenEvent actor : dict | None expires_after : int | None restrict_all : list restrict_database : dict restrict_resource : dict Event name: create-token A user created an API token. Variables expires_after -- Number of seconds after which this token will expire. restrict_all -- Restricted permissions for this token. restrict_database -- Restricted database permissions for this token. restrict_resource -- Restricted resource permissions for this token. class datasette.events. CreateTableEvent actor : dict | None database : str table : str schema : str Event name: create-table A new table has been created in the database. Variables database -- The name of the database where the table was created. table -- The name of the table that was created schema -- The SQL schema definition for the new table. class datasette.events. DropTableEvent actor : dict | None database : str table : str Event name: drop-table A table has been dropped from the database. Variables database -- The name of the database where the table was dropped. table -- The name of the table that was dropped class datasette.events. AlterTableEvent actor : dict | None database : str table : str before_schema : str after_schema : str Event name: alter-table A table has been altered. 
Variables database -- The name of the database where the table was altered table -- The name of the table that was altered before_schema -- The table's SQL schema before the alteration after_schema -- The table's SQL schema after the alteration class datasette.events. InsertRowsEvent actor : dict | None database : str table : str num_rows : int ignore : bool replace : bool Event name: insert-rows Rows were inserted into a table. Variables database -- The name of the database where the rows were inserted. table -- The name of the table where the rows were inserted. num_rows -- The number of rows that were requested to be inserted. ignore -- Was ignore set? replace -- Was replace set? class datasette.events. UpsertRowsEvent actor : dict | None database : str table : str num_rows : int Event name: upsert-rows Rows were upserted into a table. Variables database -- The name of the database where the rows were inserted. table -- The name of the table where the rows were inserted. num_rows -- The number of rows that were requested to be inserted. class datasette.events. UpdateRowEvent actor : dict | None database : str table : str pks : list Event name: update-row A row was updated in a table. Variables database -- The name of the database where the row was updated. table -- The name of the table where the row was updated. pks -- The primary key values of the updated row. class datasette.events. DeleteRowEvent actor : dict | None database : str table : str pks : list Event name: delete-row A row was deleted from a table. Variables database -- The name of the database where the row was deleted. table -- The name of the table where the row was deleted. pks -- The primary key values of the deleted row.",[],[] facets:facets-in-query-strings,facets,facets-in-query-strings,Facets in query strings,"To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL. For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this: /dbname/tablename?_facet=state&_facet=city_id This works for both the HTML interface and the .json view. 
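For example, you could fetch the faceted JSON representation from Python like this (a minimal sketch - it assumes a Datasette instance running locally on port 8001 with a dbname database containing a tablename table):

import json
from urllib.request import urlopen

# Request the .json view with two _facet= parameters
url = "http://127.0.0.1:8001/dbname/tablename.json?_facet=state&_facet=city_id"
with urlopen(url) as response:
    data = json.load(response)

print(data.get("facet_results"))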
When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this: { ""state"": { ""name"": ""state"", ""results"": [ { ""value"": ""CA"", ""label"": ""CA"", ""count"": 10, ""toggle_url"": ""http://...?_facet=city_id&_facet=state&state=CA"", ""selected"": false }, { ""value"": ""MI"", ""label"": ""MI"", ""count"": 4, ""toggle_url"": ""http://...?_facet=city_id&_facet=state&state=MI"", ""selected"": false }, { ""value"": ""MC"", ""label"": ""MC"", ""count"": 1, ""toggle_url"": ""http://...?_facet=city_id&_facet=state&state=MC"", ""selected"": false } ], ""truncated"": false } ""city_id"": { ""name"": ""city_id"", ""results"": [ { ""value"": 1, ""label"": ""San Francisco"", ""count"": 6, ""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=1"", ""selected"": false }, { ""value"": 2, ""label"": ""Los Angeles"", ""count"": 4, ""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=2"", ""selected"": false }, { ""value"": 3, ""label"": ""Detroit"", ""count"": 4, ""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=3"", ""selected"": false }, { ""value"": 4, ""label"": ""Memnonia"", ""count"": 1, ""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=4"", ""selected"": false } ], ""truncated"": false } } If Datasette detects that a column is a foreign key, the ""label"" property will be automatically derived from the detected label column on the referenced table. The default number of facet results returned is 30, controlled by the default_facet_size setting. You can increase this on an individual page by adding ?_facet_size=100 to the query string, up to a maximum of max_returned_rows (which defaults to 1000).","[""Facets""]",[] facets:facets-metadata,facets,facets-metadata,Facets in metadata,"You can turn facets on by default for specific tables by adding them to a ""facets"" key in a Datasette Metadata file. Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: [[[cog from metadata_doc import metadata_example metadata_example(cog, { ""databases"": { ""sf-trees"": { ""tables"": { ""Street_Tree_List"": { ""facets"": [""qLegalStatus""] } } } } }) ]]] [[[end]]] Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view. You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this: [[[cog metadata_example(cog, { ""facets"": [ {""array"": ""tags""}, {""date"": ""created""} ] }) ]]] [[[end]]] You can change the default facet size (the number of results shown for each facet) for a table using facet_size : [[[cog metadata_example(cog, { ""databases"": { ""sf-trees"": { ""tables"": { ""Street_Tree_List"": { ""facets"": [""qLegalStatus""], ""facet_size"": 10 } } } } }) ]]] [[[end]]]","[""Facets""]",[] facets:suggested-facets,facets,suggested-facets,Suggested facets,"Datasette's table UI will suggest facets for the user to apply, based on the following criteria: For the currently filtered data are there any columns which, if applied as a facet... 
Will return 30 or less unique options Will return more than one unique option Will return less unique options than the total number of filtered rows And the query used to evaluate this criteria can be completed in under 50ms That last point is particularly important: Datasette runs a query for every column that is displayed on a page, which could get expensive - so to avoid slow load times it sets a time limit of just 50ms for each of those queries. This means suggested facets are unlikely to appear for tables with millions of records in them.","[""Facets""]",[] full_text_search:full-text-search-fts-versions,full_text_search,full-text-search-fts-versions,FTS versions,"There are three different versions of the SQLite FTS module: FTS3, FTS4 and FTS5. You can tell which versions are supported by your instance of Datasette by checking the /-/versions page. FTS5 is the most advanced module but may not be available in the SQLite version that is bundled with your Python installation. Most importantly, FTS5 is the only version that has the ability to order by search relevance without needing extra code. If you can't be sure that FTS5 will be available, you should use FTS4.","[""Full-text search""]",[] getting_started:getting-started,getting_started,getting-started,Getting started,,[],[] index:contents,index,contents,Contents,"Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions Configuration Configuration via the command-line datasette.yaml reference Settings Plugin configuration Permissions configuration Canned queries configuration Custom CSS and JavaScript The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect datasette create-token Pages and API endpoints Top-level index Database Hidden tables Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Default representation Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Enabling CORS The JSON write API Inserting rows Upserting rows Updating a row Deleting a row Creating a table Creating a table from example data Dropping tables Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the ""root"" actor Permissions How permissions are resolved Defining permissions with ""allow"" blocks The /-/allow-debug tool Access permissions in datasette.yaml Access to an instance Access to specific databases Access to specific tables 
and views Access to specific canned queries Controlling the ability to execute arbitrary SQL Other permissions in datasette.yaml API Tokens datasette create-token Checking permissions in plugins actor_matches_allow() The permissions debug tool The ds_actor cookie Including an expiry time The /-/logout page Built-in permissions view-instance view-database view-database-download view-table view-query insert-row delete-row update-row create-table alter-table drop-table execute-sql permissions-debug debug-menu Performance and caching Immutable mode Using ""datasette inspect"" HTTP caching datasette-hashed-urls CSV export URL parameters Streaming all records Binary data Linking to binary downloads Binary plugins Facets Facets in query strings Facets in metadata Suggested facets Speeding up facets with indexes Facet by JSON array Facet by date Full-text search The table page and table view API Advanced SQLite search queries Configuring full-text search for a table or view Searches using custom SQL Enabling full-text search for a SQLite table Configuring FTS using sqlite-utils Configuring FTS using csvs-to-sqlite Configuring FTS by hand FTS versions SpatiaLite Warning Installation Installing SpatiaLite on OS X Installing SpatiaLite on Linux Spatial indexing latitude/longitude columns Making use of a spatial index Importing shapefiles into SpatiaLite Importing GeoJSON polygons using Shapely Querying polygons using within() Metadata Per-database and per-table metadata Source, license and about Column descriptions Specifying units for a column Setting a default sort order Setting a custom page size Setting which columns can be used for sorting Specifying the label column for a table Hiding tables Metadata reference Top-level metadata Database-level metadata Table-level metadata Settings Using --setting Configuration directory mode Settings default_allow_sql default_page_size sql_time_limit_ms max_returned_rows max_insert_rows num_sql_threads allow_facet default_facet_size facet_time_limit_ms facet_suggest_time_limit_ms suggest_facets allow_download allow_signed_tokens max_signed_tokens_ttl default_cache_ttl cache_size_kb allow_csv_stream max_csv_mb truncate_cells_html force_https_urls template_debug trace_debug base_url Configuring the secret Using secrets with datasette publish Introspection /-/metadata /-/versions /-/plugins /-/settings /-/config /-/databases /-/threads /-/actor /-/messages Custom pages and templates CSS classes on the Serving static files Publishing static assets Custom templates Custom pages Path parameters for pages Custom headers and status codes Returning 404s Custom redirects Custom error pages Plugins Installing plugins One-off plugins using --plugins-dir Deploying plugins using datasette publish Controlling which plugins are loaded Seeing what plugins are installed Plugin configuration Secret configuration values Writing plugins Tracing plugin hooks Writing one-off plugins Starting an installable plugin using cookiecutter Packaging a plugin Static assets Custom templates Writing plugins that accept configuration Designing URLs for your plugin Building URLs within plugins Plugins that define new plugin hooks JavaScript plugins The datasette_init event datasetteManager JavaScript plugin objects makeAboveTablePanelConfigs() makeColumnActions(columnDetails) Selectors Plugin hooks prepare_connection(conn, database, datasette) prepare_jinja2_environment(env, datasette) Page extras extra_template_vars(template, database, table, columns, view_name, request, datasette) 
extra_css_urls(template, database, table, columns, view_name, request, datasette) extra_js_urls(template, database, table, columns, view_name, request, datasette) extra_body_script(template, database, table, columns, view_name, request, datasette) publish_subcommand(publish) render_cell(row, value, column, table, database, datasette, request) register_output_renderer(datasette) register_routes(datasette) register_commands(cli) register_facet_classes() register_permissions(datasette) asgi_wrapper(datasette) startup(datasette) canned_queries(datasette, database, actor) actor_from_request(datasette, request) actors_from_ids(datasette, actor_ids) jinja2_environment_from_request(datasette, request, env) filters_from_request(request, database, table, datasette) permission_allowed(datasette, actor, action, resource) register_magic_parameters(datasette) forbidden(datasette, request, message) handle_exception(datasette, request, exception) skip_csrf(datasette, scope) get_metadata(datasette, key, database, table) menu_links(datasette, actor, request) Action hooks table_actions(datasette, actor, database, table, request) view_actions(datasette, actor, database, view, request) query_actions(datasette, actor, database, query_name, request, sql, params) row_actions(datasette, actor, request, database, table, row) database_actions(datasette, actor, database, request) homepage_actions(datasette, actor, request) Template slots top_homepage(datasette, request) top_database(datasette, request, database) top_table(datasette, request, database, table) top_row(datasette, request, database, table, row) top_query(datasette, request, database, sql) top_canned_query(datasette, request, database, query_name) Event tracking track_event(datasette, event) register_events(datasette) Testing plugins Setting up a Datasette test instance Using datasette.client in tests Using pdb for errors thrown inside Datasette Using pytest fixtures Testing outbound HTTP calls with pytest-httpx Registering a plugin for the duration of a test Internals for plugins Request object The MultiParams class Response class Returning a response with .asgi_send(send) Setting cookies with response.set_cookie() Datasette class .databases .permissions .plugin_config(plugin_name, database=None, table=None) await .render_template(template, context=None, request=None) await .actors_from_ids(actor_ids) await .permission_allowed(actor, action, resource=None, default=...) await .ensure_permissions(actor, permissions) await .check_visibility(actor, action=None, resource=None, permissions=None) .create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None) .get_permission(name_or_abbr) .get_database(name) .get_internal_database() .add_database(db, name=None, route=None) .add_memory_database(name) .remove_database(name) await .track_event(event) .sign(value, namespace=""default"") .unsign(value, namespace=""default"") .add_message(request, message, type=datasette.INFO) .absolute_url(request, path) .setting(key) .resolve_database(request) .resolve_table(request) .resolve_row(request) datasette.client datasette.urls Database class Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None) db.hash await db.execute(sql, ...) 
Results await db.execute_fn(fn) await db.execute_write(sql, params=None, block=True) await db.execute_write_script(sql, block=True) await db.execute_write_many(sql, params_seq, block=True) await db.execute_write_fn(fn, block=True, transaction=True) await db.execute_isolated_fn(fn) db.close() Database introspection CSRF protection Datasette's internal database The datasette.utils module parse_metadata(content) await_me_maybe(value) derive_named_parameters(db, sql) Tilde encoding datasette.tracer Tracing child tasks Import shortcuts Events LoginEvent LogoutEvent CreateTokenEvent CreateTableEvent DropTableEvent AlterTableEvent InsertRowsEvent UpsertRowsEvent UpdateRowEvent DeleteRowEvent Contributing General guidelines Setting up a development environment Running the tests Using fixtures Debugging Code formatting Running Black blacken-docs Prettier Editing and building the documentation Running Cog Continuously deployed demo instances Release process Alpha and beta releases Releasing bug fixes from a branch Upgrading CodeMirror Changelog 1.0a13 (2024-03-12) 1.0a12 (2024-02-29) 1.0a11 (2024-02-19) 1.0a10 (2024-02-17) 1.0a9 (2024-02-16) Alter table support for create, insert, upsert and update Permissions fix for the upsert API Permission checks now consider opinions from every plugin Other changes 1.0a8 (2024-02-07) Configuration JavaScript plugins Plugin hooks Documentation Minor fixes 0.64.6 (2023-12-22) 0.64.5 (2023-10-08) 1.0a7 (2023-09-21) 0.64.4 (2023-09-21) 1.0a6 (2023-09-07) 1.0a5 (2023-08-29) 1.0a4 (2023-08-21) 1.0a3 (2023-08-09) Smaller changes 0.64.2 (2023-03-08) 0.64.1 (2023-01-11) 0.64 (2023-01-09) 0.63.3 (2022-12-17) 1.0a2 (2022-12-14) 1.0a1 (2022-12-01) 1.0a0 (2022-11-29) Signed API tokens Write API 0.63.2 (2022-11-18) 0.63.1 (2022-11-10) 0.63 (2022-10-27) Features Plugin hooks and internals Documentation 0.62 (2022-08-14) Features Plugin hooks Bug fixes Documentation 0.61.1 (2022-03-23) 0.61 (2022-03-23) 0.60.2 (2022-02-07) 0.60.1 (2022-01-20) 0.60 (2022-01-13) Plugins and internals Faceting Other small fixes 0.59.4 (2021-11-29) 0.59.3 (2021-11-20) 0.59.2 (2021-11-13) 0.59.1 (2021-10-24) 0.59 (2021-10-14) 0.58.1 (2021-07-16) 0.58 (2021-07-14) 0.57.1 (2021-06-08) 0.57 (2021-06-05) New features Bug fixes and other improvements 0.56.1 (2021-06-05) 0.56 (2021-03-28) 0.55 (2021-02-18) 0.54.1 (2021-02-02) 0.54 (2021-01-25) The _internal database Named in-memory database support JavaScript modules Code formatting with Black and Prettier Other changes 0.53 (2020-12-10) 0.52.5 (2020-12-09) 0.52.4 (2020-12-05) 0.52.3 (2020-12-03) 0.52.2 (2020-12-02) 0.52.1 (2020-11-29) 0.52 (2020-11-28) 0.51.1 (2020-10-31) 0.51 (2020-10-31) New visual design Plugins can now add links within Datasette Binary data URL building Running Datasette behind a proxy Smaller changes 0.50.2 (2020-10-09) 0.50.1 (2020-10-09) 0.50 (2020-10-09) 0.49.1 (2020-09-15) 0.49 (2020-09-14) 0.48 (2020-08-16) 0.47.3 (2020-08-15) 0.47.2 (2020-08-12) 0.47.1 (2020-08-11) 0.47 (2020-08-11) 0.46 (2020-08-09) 0.45 (2020-07-01) Magic parameters for canned queries Log out Better plugin documentation New plugin hooks Smaller changes 0.44 (2020-06-11) Authentication Permissions Writable canned queries Flash messages Signed values and secrets CSRF protection Cookie methods register_routes() plugin hooks Smaller changes The road to Datasette 1.0 0.43 (2020-05-28) 0.42 (2020-05-08) 0.41 (2020-05-06) 0.40 (2020-04-21) 0.39 (2020-03-24) 0.38 (2020-03-08) 0.37.1 (2020-03-02) 0.37 (2020-02-25) 0.36 (2020-02-21) 0.35 (2020-02-04) 0.34 
(2020-01-29) 0.33 (2019-12-22) 0.32 (2019-11-14) 0.31.2 (2019-11-13) 0.31.1 (2019-11-12) 0.31 (2019-11-11) 0.30.2 (2019-11-02) 0.30.1 (2019-10-30) 0.30 (2019-10-18) 0.29.3 (2019-09-02) 0.29.2 (2019-07-13) 0.29.1 (2019-07-11) 0.29 (2019-07-07) ASGI New plugin hook: asgi_wrapper New plugin hook: extra_template_vars Secret plugin configuration options Facet by date Easier custom templates for table rows ?_through= for joins through many-to-many tables Small changes 0.28 (2019-05-19) Supporting databases that change Faceting improvements, and faceting plugins datasette publish cloudrun register_output_renderer plugins Medium changes Small changes 0.27.1 (2019-05-09) 0.27 (2019-01-31) 0.26.1 (2019-01-10) 0.26 (2019-01-02) 0.25.2 (2018-12-16) 0.25.1 (2018-11-04) 0.25 (2018-09-19) 0.24 (2018-07-23) 0.23.2 (2018-07-07) 0.23.1 (2018-06-21) 0.23 (2018-06-18) CSV export Foreign key expansions New configuration settings Control HTTP caching with ?_ttl= Improved support for SpatiaLite latest.datasette.io Miscellaneous 0.22.1 (2018-05-23) 0.22 (2018-05-20) 0.21 (2018-05-05) 0.20 (2018-04-20) 0.19 (2018-04-16) 0.18 (2018-04-14) 0.17 (2018-04-13) 0.16 (2018-04-13) 0.15 (2018-04-09) 0.14 (2017-12-09) 0.13 (2017-11-24) 0.12 (2017-11-16) 0.11 (2017-11-14) 0.10 (2017-11-14) 0.9 (2017-11-13) 0.8 (2017-11-13)","[""Datasette""]",[] installation:id1,installation,id1,Installation,"If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Installing plugins using pipx Upgrading packages using pipx Using Docker Loading SpatiaLite Installing plugins A note about extensions",[],[] installation:installation-advanced,installation,installation-advanced,Advanced installation options,,"[""Installation""]",[] installation:installation-basic,installation,installation-basic,Basic installation,,"[""Installation""]",[] installation:installation-extensions,installation,installation-extensions,A note about extensions,"SQLite supports extensions, such as SpatiaLite for geospatial operations. These can be loaded using the --load-extension argument, like so: datasette --load-extension=/usr/local/lib/mod_spatialite.dylib Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension: Your Python installation does not have the ability to load SQLite extensions. In some cases you may see the following error message instead: AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension' On macOS the easiest fix for this is to install Datasette using Homebrew: brew install datasette Use which datasette to confirm that datasette will run that version. 
The output should look something like this: /usr/local/opt/datasette/bin/datasette If you get a different location here such as /Library/Frameworks/Python.framework/Versions/3.10/bin/datasette you can run the following command to cause datasette to execute the Homebrew version instead: alias datasette=$(echo $(brew --prefix datasette)/bin/datasette) You can undo this operation using: unalias datasette If you need to run SQLite with extension support for other Python code, you can do so by installing Python itself using Homebrew: brew install python Then executing Python using: /usr/local/opt/python@3/libexec/bin/python A more convenient way to work with this version of Python may be to use it to create a virtual environment: /usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv Then activate it like this: source datasette-venv/bin/activate Now running python and pip will work against a version of Python 3 that includes support for SQLite extensions: pip install datasette which datasette datasette --version","[""Installation""]",[] installation:installing-plugins-using-pipx,installation,installing-plugins-using-pipx,Installing plugins using pipx,"You can install additional datasette plugins with pipx inject like so: pipx inject datasette datasette-json-html injected package datasette-json-html into venv datasette done! ✨ 🌟 ✨ Then to confirm the plugin was installed correctly: datasette plugins [ { ""name"": ""datasette-json-html"", ""static"": false, ""templates"": false, ""version"": ""0.6"" } ]","[""Installation"", ""Advanced installation options"", ""Using pipx""]",[] installation:upgrading-packages-using-pipx,installation,upgrading-packages-using-pipx,Upgrading packages using pipx,"You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : pipx upgrade datasette upgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: datasette plugins [ { ""name"": ""datasette-vega"", ""static"": true, ""templates"": false, ""version"": ""0.6"" } ] Now upgrade the plugin: pipx runpip datasette install -U datasette-vega Collecting datasette-vega Downloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB) |████████████████████████████████| 1.8 MB 2.0 MB/s ... Installing collected packages: datasette-vega Attempting uninstall: datasette-vega Found existing installation: datasette-vega 0.6 Uninstalling datasette-vega-0.6: Successfully uninstalled datasette-vega-0.6 Successfully installed datasette-vega-0.6.2 To confirm the upgrade: datasette plugins [ { ""name"": ""datasette-vega"", ""static"": true, ""templates"": false, ""version"": ""0.6.2"" } ]","[""Installation"", ""Advanced installation options"", ""Using pipx""]",[] internals:database-close,internals,database-close,db.close(),"Closes all of the open connections to file-backed databases. This is mainly intended to be used by large test suites, to avoid hitting limits on the number of open files.","[""Internals for plugins"", ""Database class""]",[] internals:database-constructor,internals,database-constructor,"Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None)","The Database() constructor can be used by plugins, in conjunction with .add_database(db, name=None, route=None) , to create and register new databases.
The arguments are as follows: ds - Datasette class (required) The Datasette instance you are attaching this database to. path - string Path to a SQLite database file on disk. is_mutable - boolean Set this to False to cause Datasette to open the file in immutable mode. is_memory - boolean Use this to create non-shared memory connections. memory_name - string or None Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist any changes made to them for the lifetime of the Datasette server process. The first argument is the datasette instance you are attaching to, the second is a path= , then is_mutable and is_memory are both optional arguments.","[""Internals for plugins"", ""Database class""]",[] internals:database-execute,internals,database-execute,"await db.execute(sql, ...)","Executes a SQL query against the database and returns the resulting rows (see Results ). sql - string (required) The SQL query to execute. This can include ? or :named parameters. params - list or dict A list or dictionary of values to use for the parameters. List for ? , dictionary for :named . truncate - boolean Should the rows returned by the query be truncated at the maximum page size? Defaults to True , set this to False to disable truncation. custom_time_limit - integer ms A custom time limit for this query. This can be set to a lower value than the Datasette configured default. If a query takes longer than this it will be terminated early and raise a datasette.database.QueryInterrupted exception. page_size - integer Set a custom page size for truncation, over-riding the configured Datasette default. log_sql_errors - boolean Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to True .","[""Internals for plugins"", ""Database class""]",[] internals:database-execute-fn,internals,database-execute-fn,await db.execute_fn(fn),"Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the await . Example usage: def get_version(conn): return conn.execute( ""select sqlite_version()"" ).fetchall()[0][0] version = await db.execute_fn(get_version)","[""Internals for plugins"", ""Database class""]",[] internals:database-execute-write,internals,database-execute-write,"await db.execute_write(sql, params=None, block=True)","SQLite only allows one database connection to write at a time. Datasette handles this for you by maintaining a queue of writes to be executed against a given database. Plugins can submit write operations to this queue and they will be executed in the order in which they are received. This method can be used to queue up a non-SELECT SQL query to be executed against a single write connection to the database. You can pass additional SQL parameters as a tuple or dictionary. The method will block until the operation is completed, and the return value will be the return from calling conn.execute(...) using the underlying sqlite3 Python library. If you pass block=False this behavior changes to ""fire and forget"" - queries will be added to the write queue and executed in a separate thread while your code can continue to do other things. The method will return a UUID representing the queued task.
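For example, a plugin holding a Database object could queue up an insert like this (a minimal sketch - the logs table and its message column are illustrative, not part of Datasette):

# Queue a parameterized INSERT against the single write connection
await db.execute_write(
    "insert into logs (message) values (?)",
    ["hello world"],
)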
Each call to execute_write() will be executed inside a transaction.","[""Internals for plugins"", ""Database class""]",[] internals:database-execute-write-fn,internals,database-execute-write-fn,"await db.execute_write_fn(fn, block=True, transaction=True)","This method works like .execute_write() , but instead of a SQL statement you give it a callable Python function. Your function will be queued up and then called when the write connection is available, passing that connection as the argument to the function. The function can then perform multiple actions, safe in the knowledge that it has exclusive access to the single writable connection for as long as it is executing. fn needs to be a regular function, not an async def function. For example: def delete_and_return_count(conn): conn.execute(""delete from some_table where id > 5"") return conn.execute( ""select count(*) from some_table"" ).fetchone()[0] try: num_rows_left = await database.execute_write_fn( delete_and_return_count ) except Exception as e: print(""An error occurred:"", e) The value returned from await database.execute_write_fn(...) will be the return value from your function. If your function raises an exception that exception will be propagated up to the await line. By default your function will be executed inside a transaction. You can pass transaction=False to disable this behavior, though if you do that you should be careful to manually apply transactions - ideally using the with conn: pattern, or you may see OperationalError: database table is locked errors. If you specify block=False the method becomes fire-and-forget, queueing your function to be executed and then allowing your code after the call to .execute_write_fn() to continue running while the underlying thread waits for an opportunity to run your function. A UUID representing the queued task will be returned. Any exceptions in your code will be silently swallowed.","[""Internals for plugins"", ""Database class""]",[] internals:database-hash,internals,database-hash,db.hash,"If the database was opened in immutable mode, this property returns the 64 character SHA-256 hash of the database contents as a string. Otherwise it returns None .","[""Internals for plugins"", ""Database class""]",[] internals:datasette-absolute-url,internals,datasette-absolute-url,".absolute_url(request, path)","request - Request The current Request object path - string A path, for example /dbname/table.json Returns the absolute URL for the given path, including the protocol and host. For example: absolute_url = datasette.absolute_url( request, ""/dbname/table.json"" ) # Would return ""http://localhost:8001/dbname/table.json"" The current request object is used to determine the hostname and protocol that should be used for the returned URL. The force_https_urls configuration setting is taken into account.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-actors-from-ids,internals,datasette-actors-from-ids,await .actors_from_ids(actor_ids),"actor_ids - list of strings or integers A list of actor IDs to look up. Returns a dictionary, where the keys are the IDs passed to it and the values are the corresponding actor dictionaries. This method is mainly designed to be used with plugins. See the actors_from_ids(datasette, actor_ids) documentation for details. 
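For example (a sketch - the actor IDs shown are illustrative):

# Resolve two actor IDs into full actor dictionaries
actors = await datasette.actors_from_ids(["1", "2"])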
If no plugins that implement that hook are installed, the default return value looks like this: { ""1"": {""id"": ""1""}, ""2"": {""id"": ""2""} }","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-add-database,internals,datasette-add-database,".add_database(db, name=None, route=None)","db - datasette.database.Database instance The database to be attached. name - string, optional The name to be used for this database . If not specified Datasette will pick one based on the filename or memory name. route - string, optional This will be used in the URL path. If not specified, it will default to the same thing as the name . The datasette.add_database(db) method lets you add a new database to the current Datasette instance. The db parameter should be an instance of the datasette.database.Database class. For example: from datasette.database import Database datasette.add_database( Database( datasette, path=""path/to/my-new-database.db"", ) ) This will add a mutable database and serve it at /my-new-database . Use is_mutable=False to add an immutable database. .add_database() returns the Database instance, with its name set as the database.name attribute. Any time you are working with a newly added database you should use the return value of .add_database() , for example: db = datasette.add_database( Database(datasette, memory_name=""statistics"") ) await db.execute_write( ""CREATE TABLE foo(id integer primary key)"" )","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-add-memory-database,internals,datasette-add-memory-database,.add_memory_database(name),"Adds a shared in-memory database with the specified name: datasette.add_memory_database(""statistics"") This is a shortcut for the following: from datasette.database import Database datasette.add_database( Database(datasette, memory_name=""statistics"") ) Using either of these patterns will result in the in-memory database being served at /statistics .","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-add-message,internals,datasette-add-message,".add_message(request, message, type=datasette.INFO)","request - Request The current Request object message - string The message string type - constant, optional The message type - datasette.INFO , datasette.WARNING or datasette.ERROR Datasette's flash messaging mechanism allows you to add a message that will be displayed to the user on the next page that they visit. Messages are persisted in a ds_messages cookie. This method adds a message to that cookie. You can try out these messages (including the different visual styling of the three message types) using the /-/messages debugging tool.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-check-visibility,internals,datasette-check-visibility,"await .check_visibility(actor, action=None, resource=None, permissions=None)","actor - dictionary The authenticated actor. This is usually request.actor . action - string, optional The name of the action that is being permission checked. resource - string or tuple, optional The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. permissions - list of action strings or (action, resource) tuples, optional Provide this instead of action and resource to check multiple permissions at once.
This convenience method can be used to answer the question ""should this item be considered private, in that it is visible to me but it is not visible to anonymous users?"" It returns a tuple of two booleans, (visible, private) . visible indicates if the actor can see this resource. private will be True if an anonymous user would not be able to view the resource. This example checks if the user can access a specific table, and sets private so that a padlock icon can later be displayed: visible, private = await datasette.check_visibility( request.actor, action=""view-table"", resource=(database, table), ) The following example runs three checks in a row, similar to await .ensure_permissions(actor, permissions) . If any of the checks are denied before one of them is explicitly granted then visible will be False . private will be True if an anonymous user would not be able to view the resource. visible, private = await datasette.check_visibility( request.actor, permissions=[ (""view-table"", (database, table)), (""view-database"", database), ""view-instance"", ], )","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-create-token,internals,datasette-create-token,".create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None)","actor_id - string The ID of the actor to create a token for. expires_after - int, optional The number of seconds after which the token should expire. restrict_all - iterable, optional A list of actions that this token should be restricted to across all databases and resources. restrict_database - dict, optional For restricting actions within specific databases, e.g. {""mydb"": [""view-table"", ""view-query""]} . restrict_resource - dict, optional For restricting actions to specific resources (tables, SQL views and Canned queries ) within a database. For example: {""mydb"": {""mytable"": [""insert-row"", ""update-row""]}} . This method returns a signed API token of the format dstok_... which can be used to authenticate requests to the Datasette API. All tokens must have an actor_id string indicating the ID of the actor which the token will act on behalf of. Tokens default to lasting forever, but can be set to expire after a given number of seconds using the expires_after argument. The following code creates a token for user1 that will expire after an hour: token = datasette.create_token( actor_id=""user1"", expires_after=3600, ) The three restrict_* arguments can be used to create a token that has additional restrictions beyond what the associated actor is allowed to do. The following example creates a token that can access view-instance and view-table across everything, can additionally use view-query for anything in the docs database and is allowed to execute insert-row and update-row in the attachments table in that database: token = datasette.create_token( actor_id=""user1"", restrict_all=(""view-instance"", ""view-table""), restrict_database={""docs"": (""view-query"",)}, restrict_resource={ ""docs"": { ""attachments"": (""insert-row"", ""update-row"") } }, )","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-databases,internals,datasette-databases,.databases,"Property exposing a collections.OrderedDict of databases currently connected to Datasette. The dictionary keys are the name of the database that is used in the URL - e.g. /fixtures would have a key of ""fixtures"" . The values are Database class instances. 
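For example, a plugin could inspect every attached database like this (a minimal sketch):

for name, db in datasette.databases.items():
    # name is the database name used in the URL, db is a Database instance
    print(name, db.is_mutable, db.is_memory)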
All databases are listed, irrespective of user permissions.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-ensure-permissions,internals,datasette-ensure-permissions,"await .ensure_permissions(actor, permissions)","actor - dictionary The authenticated actor. This is usually request.actor . permissions - list A list of permissions to check. Each permission in that list can be a string action name or a 2-tuple of (action, resource) . This method allows multiple permissions to be checked at once. It raises a datasette.Forbidden exception if any of the checks are denied before one of them is explicitly granted. This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if either one of the following checks returns True or not a single one of them returns False : await datasette.ensure_permissions( request.actor, [ (""view-table"", (database, table)), (""view-database"", database), ""view-instance"", ], )","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-get-database,internals,datasette-get-database,.get_database(name),"name - string, optional The name of the database - optional. Returns the specified database object. Raises a KeyError if the database does not exist. Call this method without an argument to return the first connected database.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-get-permission,internals,datasette-get-permission,.get_permission(name_or_abbr),"name_or_abbr - string The name or abbreviation of the permission to look up, e.g. view-table or vt . Returns a Permission object representing the permission, or raises a KeyError if one is not found.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-permission-allowed,internals,datasette-permission-allowed,"await .permission_allowed(actor, action, resource=None, default=...)","actor - dictionary The authenticated actor. This is usually request.actor . action - string The name of the action that is being permission checked. resource - string or tuple, optional The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. default - optional: True, False or None What value should be returned by default if nothing provides an opinion on this permission check. Set to True for default allow or False for default deny. If not specified the default from the Permission() tuple that was registered using register_permissions(datasette) will be used. Check if the given actor has permission to perform the given action on the given resource. Some permission checks are carried out against rules defined in datasette.yaml , while other custom permissions may be decided by plugins that implement the permission_allowed(datasette, actor, action, resource) plugin hook. If neither metadata.json nor any of the plugins provide an answer to the permission query the default argument will be returned. See Built-in permissions for a full list of permission actions included in Datasette core.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-permissions,internals,datasette-permissions,.permissions,"Property exposing a dictionary of permissions that have been registered using the register_permissions(datasette) plugin hook. The dictionary keys are the permission names - e.g. view-instance - and the values are Permission() objects describing the permission. 
Here is a description of that object .","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-plugin-config,internals,datasette-plugin-config,".plugin_config(plugin_name, database=None, table=None)","plugin_name - string The name of the plugin to look up configuration for. Usually this is something similar to datasette-cluster-map . database - None or string The database the user is interacting with. table - None or string The table the user is interacting with. This method lets you read plugin configuration values that were set in datasette.yaml . See Writing plugins that accept configuration for full details of how this method should be used. The return value will be the value from the configuration file - usually a dictionary. If the plugin is not configured the return value will be None .","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-remove-database,internals,datasette-remove-database,.remove_database(name),"name - string The name of the database to be removed. This removes a database that has been previously added. name= is the unique name of that database.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-resolve-database,internals,datasette-resolve-database,.resolve_database(request),"request - Request object A request object If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a database named group, you can use this method to resolve the database object. This returns a Database instance. If the database cannot be found, it raises a datasette.utils.asgi.DatabaseNotFound exception - which is a subclass of datasette.utils.asgi.NotFound with a .database_name attribute set to the name of the database that was requested.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-resolve-row,internals,datasette-resolve-row,.resolve_row(request),"request - Request object A request object This method assumes your route declares named groups for database , table and pks . It returns a ResolvedRow named tuple instance with the following fields: db - Database The database object table - string The name of the table sql - string SQL snippet that can be used in a WHERE clause to select the row params - dict Parameters that should be passed to the SQL query pks - list List of primary key column names pk_values - list List of primary key values decoded from the URL row - sqlite3.Row The row itself If the database or table cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception. If the row cannot be found it raises a datasette.utils.asgi.RowNotFound exception. This has .database_name , .table and .pk_values attributes, extracted from the request path.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-resolve-table,internals,datasette-resolve-table,.resolve_table(request),"request - Request object A request object This assumes that the regular expression for your route declares both a database and a table named group. It returns a ResolvedTable named tuple instance with the following fields: db - Database The database object table - string The name of the table (or view) is_view - boolean True if this is a view, False if it is a table If the database or table cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. 
If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception - a subclass of datasette.utils.asgi.NotFound with .database_name and .table attributes.","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-setting,internals,datasette-setting,.setting(key),"key - string The name of the setting, e.g. base_url . Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting. For example: downloads_are_allowed = datasette.setting(""allow_download"")","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-track-event,internals,datasette-track-event,await .track_event(event),"event - Event An instance of a subclass of datasette.events.Event . Plugins can call this to track events, using classes they have previously registered. See Event tracking for details. The event will then be passed to all plugins that have registered to receive events using the track_event(datasette, event) hook. Example usage, assuming the plugin has previously registered the BanUserEvent class: await datasette.track_event( BanUserEvent(user={""id"": 1, ""username"": ""cleverbot""}) )","[""Internals for plugins"", ""Datasette class""]",[] internals:datasette-unsign,internals,datasette-unsign,".unsign(value, namespace=""default"")","signed - any serializable type The signed string that was created using .sign(value, namespace=""default"") . namespace - string, optional The alternative namespace, if one was used. Returns the original, decoded object that was passed to .sign(value, namespace=""default"") . If the signature is not valid this raises a itsdangerous.BadSignature exception.","[""Internals for plugins"", ""Datasette class""]",[] internals:id1,internals,id1,.get_internal_database(),Returns a database object for reading and writing to the private internal database .,"[""Internals for plugins"", ""Datasette class""]",[] internals:internals,internals,internals,Internals for plugins,Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.,[],[] internals:internals-database,internals,internals-database,Database class,"Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.","[""Internals for plugins""]",[] internals:internals-database-introspection,internals,internals-database-introspection,Database introspection,"The Database class also provides properties and methods for introspecting the database. db.name - string The name of the database - usually the filename without the .db prefix. db.size - integer The size of the database file in bytes. 0 for :memory: databases. db.mtime_ns - integer or None The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases. db.is_mutable - boolean Is this database mutable, and allowed to accept writes? db.is_memory - boolean Is this database an in-memory database? await db.attached_databases() - list of named tuples Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields seq , name and file . await db.table_exists(table) - boolean Check if a table called table exists. await db.view_exists(view) - boolean Check if a view called view exists. 
await db.table_names() - list of strings List of names of tables in the database. await db.view_names() - list of strings List of names of views in the database. await db.table_columns(table) - list of strings Names of columns in a specific table. await db.table_column_details(table) - list of named tuples Full details of the columns in a specific table. Each column is represented by a Column named tuple with fields cid (integer representing the column position), name (string), type (string, e.g. REAL or VARCHAR(30) ), notnull (integer 1 or 0), default_value (string or None), is_pk (integer 1 or 0). await db.primary_keys(table) - list of strings Names of the columns that are part of the primary key for this table. await db.fts_table(table) - string or None The name of the FTS table associated with this table, if one exists. await db.label_column_for_table(table) - string or None The label column that is associated with this table - either automatically detected or using the ""label_column"" key from Metadata , see Specifying the label column for a table . await db.foreign_keys_for_table(table) - list of dictionaries Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this: {""column"": string, ""other_table"": string, ""other_column"": string} . await db.hidden_table_names() - list of strings List of tables which Datasette ""hides"" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the Hiding tables feature. await db.get_table_definition(table) - string Returns the SQL definition for the table - the CREATE TABLE statement and any associated CREATE INDEX statements. await db.get_view_definition(view) - string Returns the SQL definition of the named view. await db.get_all_foreign_keys() - dictionary Dictionary representing both incoming and outgoing foreign keys for this table. It has two keys, ""incoming"" and ""outgoing"" , each of which is a list of dictionaries with keys ""column"" , ""other_table"" and ""other_column"" . For example: { ""incoming"": [], ""outgoing"": [ { ""other_table"": ""attraction_characteristic"", ""column"": ""characteristic_id"", ""other_column"": ""pk"", }, { ""other_table"": ""roadside_attractions"", ""column"": ""attraction_id"", ""other_column"": ""pk"", } ] }","[""Internals for plugins"", ""Database class""]",[] internals:internals-datasette,internals,internals-datasette,Datasette class,"This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette . You can create your own instance of this - for example to help write tests for a plugin - like so: from datasette.app import Datasette # With no arguments a single in-memory database will be attached datasette = Datasette() # The files= argument can load files from disk datasette = Datasette(files=[""/path/to/my-database.db""]) # Pass metadata as a JSON dictionary like this datasette = Datasette( files=[""/path/to/my-database.db""], metadata={ ""databases"": { ""my-database"": { ""description"": ""This is my database"" } } }, ) Constructor parameters include: files=[...] - a list of database files to open immutables=[...] - a list of database files to open in immutable mode metadata={...} - a dictionary of Metadata config_dir=... 
- the configuration directory to use, stored in datasette.config_dir","[""Internals for plugins""]",[] internals:internals-datasette-urls,internals,internals-datasette-urls,datasette.urls,"The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect. datasette.urls.instance(format=None) Returns the URL to the Datasette instance root page. This is usually ""/"" . datasette.urls.path(path, format=None) Takes a path and returns the full path, taking base_url into account. For example, datasette.urls.path(""-/logout"") will return the path to the logout page, which will be ""/-/logout"" by default or /prefix-path/-/logout if base_url is set to /prefix-path/ datasette.urls.logout() Returns the URL to the logout page, usually ""/-/logout"" datasette.urls.static(path) Returns the URL of one of Datasette's default static assets, for example ""/-/static/app.css"" datasette.urls.static_plugins(plugin_name, path) Returns the URL of one of the static assets belonging to a plugin. datasette.urls.static_plugins(""datasette_cluster_map"", ""datasette-cluster-map.js"") would return ""/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js"" datasette.urls.database(database_name, format=None) Returns the URL to a database page, for example ""/fixtures"" datasette.urls.table(database_name, table_name, format=None) Returns the URL to a table page, for example ""/fixtures/facetable"" datasette.urls.query(database_name, query_name, format=None) Returns the URL to a query page, for example ""/fixtures/pragma_cache_size"" These functions can be accessed via the {{ urls }} object in Datasette templates, for example: <a href=""{{ urls.instance() }}"">Homepage</a> <a href=""{{ urls.database(""fixtures"") }}"">Fixtures database</a> <a href=""{{ urls.table(""fixtures"", ""facetable"") }}"">facetable table</a> <a href=""{{ urls.query(""fixtures"", ""pragma_cache_size"") }}"">pragma_cache_size query</a> Use the format=""json"" (or ""csv"" or other formats supported by plugins) arguments to get back URLs to the JSON representation. This is the path with .json added on the end. These methods each return a datasette.utils.PrefixedUrlString object, which is a subclass of the Python str type. This allows the logic that considers the base_url setting to detect if that prefix has already been applied to the path.","[""Internals for plugins"", ""Datasette class""]",[] internals:internals-internal,internals,internals-internal,Datasette's internal database,"Datasette maintains an ""internal"" SQLite database used for configuration, caching, and storage. Plugins can store configuration, settings, and other data inside this database. By default, Datasette will use a temporary in-memory SQLite database as the internal database, which is created at startup and destroyed at shutdown. Users of Datasette can optionally pass in a --internal flag to specify the path to a SQLite database to use as the internal database, which will persist internal data across Datasette instances. Datasette maintains tables called catalog_databases , catalog_tables , catalog_columns , catalog_indexes , catalog_foreign_keys with details of the attached databases and their schemas. These tables should not be considered a stable API - they may change between Datasette releases.
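For example, a plugin could read from these catalog tables through the internal database connection. The sketch below is illustrative only - the startup() hook is simply a convenient place to run a query, and the catalog schema itself is not a stable API:
from datasette import hookimpl


@hookimpl
def startup(datasette):
    async def inner():
        # The internal database is a regular Database object
        internal_db = datasette.get_internal_database()
        # Count the tables Datasette has cataloged across all attached databases
        result = await internal_db.execute(""select * from catalog_tables"")
        print(""Cataloged tables:"", len(result.rows))

    return inner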
The internal database is not exposed in the Datasette application by default, which means private data can safely be stored without worry of accidentally leaking information through the default Datasette interface and API. However, other plugins do have full read and write access to the internal database. Plugins can access this database by calling internal_db = datasette.get_internal_database() and then executing queries using the Database API . Plugin authors are asked to practice good etiquette when using the internal database, as all plugins use the same database to store data. For example: Use a unique prefix when creating tables, indices, and triggers in the internal database. If your plugin is called datasette-xyz , then prefix names with datasette_xyz_* . Avoid long-running write statements that may stall or block other plugins that are trying to write at the same time. Use temporary tables or shared in-memory attached databases when possible. Avoid implementing features that could expose private data stored in the internal database by other plugins.","[""Internals for plugins""]",[] internals:internals-multiparams,internals,internals-multiparams,The MultiParams class,"request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values. Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar . request.args[key] - string Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[""foo""] would return ""1"" . request.args.get(key) - string or None Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get(""q"", """") . request.args.getlist(key) - list of strings Returns the list of strings for that key. request.args.getlist(""foo"") would return [""1"", ""2""] in the above example. request.args.getlist(""bar"") would return [""3""] . If the key is missing an empty list will be returned. request.args.keys() - list of strings Returns the list of available keys - for the example this would be [""foo"", ""bar""] . key in request.args - True or False You can use if key in request.args to check if a key is present. for key in request.args - iterator This lets you loop through every available key. len(request.args) - integer Returns the number of keys.","[""Internals for plugins""]",[] internals:internals-response,internals,internals-response,Response class,"The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook. The Response() constructor takes the following arguments: body - string The body of the response. status - integer (optional) The HTTP status - defaults to 200. headers - dictionary (optional) A dictionary of extra HTTP headers, e.g. {""x-hello"": ""world""} . content_type - string (optional) The content-type for the response. Defaults to text/plain . For example: from datasette.utils.asgi import Response response = Response( ""This is XML"", content_type=""application/xml; charset=utf-8"", ) The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) 
helper methods: from datasette.utils.asgi import Response html_response = Response.html(""This is HTML"") json_response = Response.json({""this_is"": ""json""}) text_response = Response.text( ""This will become utf-8 encoded text"" ) # Redirects are served as 302, unless you pass status=301: redirect_response = Response.redirect( ""https://latest.datasette.io/"" ) Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively. Each of the helper methods take optional status= and headers= arguments, documented above.","[""Internals for plugins""]",[] internals:internals-response-asgi-send,internals,internals-response-asgi-send,Returning a response with .asgi_send(send),"In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook. Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example: async def require_authorization(scope, receive, send): response = Response.text( ""401 Authorization Required"", headers={ ""www-authenticate"": 'Basic realm=""Datasette"", charset=""UTF-8""' }, status=401, ) await response.asgi_send(send)","[""Internals for plugins"", ""Response class""]",[] internals:internals-response-set-cookie,internals,internals-response-set-cookie,Setting cookies with response.set_cookie(),"To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this: def set_cookie( self, key, value="""", max_age=None, expires=None, path=""/"", domain=None, secure=False, httponly=False, samesite=""lax"", ): ... You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication : response = Response.redirect(""/"") response.set_cookie( ""ds_actor"", datasette.sign({""a"": {""id"": ""cleopaws""}}, ""actor""), ) return response","[""Internals for plugins"", ""Response class""]",[] internals:internals-shortcuts,internals,internals-shortcuts,Import shortcuts,"The following commonly used symbols can be imported directly from the datasette module: from datasette import Response from datasette import Forbidden from datasette import NotFound from datasette import hookimpl from datasette import actor_matches_allow","[""Internals for plugins""]",[] internals:internals-utils-derive-named-parameters,internals,internals-utils-derive-named-parameters,"derive_named_parameters(db, sql)","Derive the list of named parameters referenced in a SQL query, using an explain query executed against the provided database. async datasette.utils. derive_named_parameters db : Database sql : str List [ str ] Given a SQL statement, return a list of named parameters that are used in the statement e.g. for select * from foo where id=:id this would return [""id""]","[""Internals for plugins"", ""The datasette.utils module""]",[] internals:internals-utils-parse-metadata,internals,internals-utils-parse-metadata,parse_metadata(content),"This function accepts a string containing either JSON or YAML, expected to be of the format described in Metadata . It returns a nested Python dictionary representing the parsed data from that string. If the metadata cannot be parsed as either JSON or YAML the function will raise a utils.BadMetadataError exception. datasette.utils. 
parse_metadata content : str dict Detects if content is JSON or YAML and parses it appropriately.","[""Internals for plugins"", ""The datasette.utils module""]",[] introspection:id1,introspection,id1,Introspection,"Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON.",[],[] introspection:jsondataview-actor,introspection,jsondataview-actor,/-/actor,"Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. { ""actor"": { ""id"": 1, ""username"": ""some-user"" } }","[""Introspection""]",[] introspection:messagesdebugview,introspection,messagesdebugview,/-/messages,"The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature.","[""Introspection""]",[] javascript_plugins:id1,javascript_plugins,id1,JavaScript plugins,"Datasette can run custom JavaScript in several different ways: Datasette plugins written in Python can use the extra_js_urls() or extra_body_script() plugin hooks to inject JavaScript into a page Datasette instances with custom templates can include additional JavaScript in those templates The extra_js_urls key in datasette.yaml can be used to include extra JavaScript There are no limitations on what this JavaScript can do. It is executed directly by the browser, so it can manipulate the DOM, fetch additional data and do anything else that JavaScript is capable of. Custom JavaScript has security implications, especially for authenticated Datasette instances where the JavaScript might run in the context of the authenticated user. It's important to carefully review any JavaScript you run in your Datasette instance.",[],[] javascript_plugins:id2,javascript_plugins,id2,JavaScript plugin objects,"JavaScript plugins are blocks of code that can be registered with Datasette using the registerPlugin() method on the datasetteManager object. The implementation object passed to this method should include a version key defining the plugin version, and one or more of the following named functions providing the implementation of the plugin:","[""JavaScript plugins""]",[] javascript_plugins:javascript-datasette-init,javascript_plugins,javascript-datasette-init,The datasette_init event,"Datasette emits a custom event called datasette_init when the page is loaded. This event is dispatched on the document object, and includes a detail object with a reference to the datasetteManager object. 
Your JavaScript code can listen out for this event using document.addEventListener() like this: document.addEventListener(""datasette_init"", function (evt) { const manager = evt.detail; console.log(""Datasette version:"", manager.VERSION); });","[""JavaScript plugins""]",[] javascript_plugins:javascript-datasette-manager,javascript_plugins,javascript-datasette-manager,datasetteManager,"The datasetteManager object VERSION - string The version of Datasette plugins - Map() A Map of currently loaded plugin names to plugin implementations registerPlugin(name, implementation) Call this to register a plugin, passing its name and implementation selectors - object An object providing named aliases to useful CSS selectors, listed below","[""JavaScript plugins""]",[] javascript_plugins:javascript-datasette-manager-selectors,javascript_plugins,javascript-datasette-manager-selectors,Selectors,"These are available on the selectors property of the datasetteManager object. const DOM_SELECTORS = { /** Should have one match */ jsonExportLink: "".export-links a[href*=json]"", /** Event listeners that go outside of the main table, e.g. existing scroll listener */ tableWrapper: "".table-wrapper"", table: ""table.rows-and-columns"", aboveTablePanel: "".above-table-panel"", // These could have multiple matches /** Used for selecting table headers. Use makeColumnActions if you want to add menu items. */ tableHeaders: `table.rows-and-columns th`, /** Used to add ""where"" clauses to query using direct manipulation */ filterRows: "".filter-row"", /** Used to show top available enum values for a column (""facets"") */ facetResults: "".facet-results [data-column]"", };","[""JavaScript plugins""]",[] javascript_plugins:javascript-plugins-makeabovetablepanelconfigs,javascript_plugins,javascript-plugins-makeabovetablepanelconfigs,makeAboveTablePanelConfigs(),"This method should return a JavaScript array of objects defining additional panels to be added to the top of the table page. Each object should have the following: id - string A unique string ID for the panel, for example map-panel label - string A human-readable label for the panel render(node) - function A function that will be called with a DOM node to render the panel into This example shows how a plugin might define a single panel: document.addEventListener('datasette_init', function(ev) { ev.detail.registerPlugin('panel-plugin', { version: 0.1, makeAboveTablePanelConfigs: () => { return [ { id: 'first-panel', label: 'First panel', render: node => { node.innerHTML = '

<h2>My custom panel</h2><p>This is a custom panel that I added using a JavaScript plugin</p>

'; } } ] } }); }); When a page with a table loads, all registered plugins that implement makeAboveTablePanelConfigs() will be called and panels they return will be added to the top of the table page.","[""JavaScript plugins"", ""JavaScript plugin objects""]",[] javascript_plugins:javascript-plugins-makecolumnactions,javascript_plugins,javascript-plugins-makecolumnactions,makeColumnActions(columnDetails),"This method, if present, will be called when Datasette is rendering the cog action menu icons that appear at the top of the table view. By default these include options like ""Sort ascending/descending"" and ""Facet by this"", but plugins can return additional actions to be included in this menu. The method will be called with a columnDetails object with the following keys: columnName - string The name of the column columnNotNull - boolean True if the column is defined as NOT NULL columnType - string The SQLite data type of the column isPk - boolean True if the column is part of the primary key It should return a JavaScript array of objects each with a label and onClick property: label - string The human-readable label for the action onClick(evt) - function A function that will be called when the action is clicked The evt object passed to the onClick is the standard browser event object that triggered the click. This example plugin adds two menu items - one to copy the column name to the clipboard and another that displays the column metadata in an alert() window: document.addEventListener('datasette_init', function(ev) { ev.detail.registerPlugin('column-name-plugin', { version: 0.1, makeColumnActions: (columnDetails) => { return [ { label: 'Copy column to clipboard', onClick: async (evt) => { await navigator.clipboard.writeText(columnDetails.columnName) } }, { label: 'Alert column metadata', onClick: () => alert(JSON.stringify(columnDetails, null, 2)) } ]; } }); });","[""JavaScript plugins"", ""JavaScript plugin objects""]",[] json_api:column-filter-arguments,json_api,column-filter-arguments,Column filter arguments,"You can filter the data returned by the table based on column values using a query string argument. ?column__exact=value or ?_column=value Returns rows where the specified column exactly matches the value. ?column__not=value Returns rows where the column does not match the value. ?column__contains=value Rows where the string column contains the specified value ( column like ""%value%"" in SQL). ?column__notcontains=value Rows where the string column does not contain the specified value ( column not like ""%value%"" in SQL). ?column__endswith=value Rows where the string column ends with the specified value ( column like ""%value"" in SQL). ?column__startswith=value Rows where the string column starts with the specified value ( column like ""value%"" in SQL). ?column__gt=value Rows which are greater than the specified value. ?column__gte=value Rows which are greater than or equal to the specified value. ?column__lt=value Rows which are less than the specified value. ?column__lte=value Rows which are less than or equal to the specified value. ?column__like=value Match rows with a LIKE clause, case insensitive and with % as the wildcard character. ?column__notlike=value Match rows that do not match the provided LIKE clause. ?column__glob=value Similar to LIKE but uses Unix wildcard syntax and is case sensitive. ?column__in=value1,value2,value3 Rows where column matches any of the provided values. You can use a comma separated string, or you can use a JSON array. 
The JSON array option is useful if one of your matching values itself contains a comma: ?column__in=[""value"",""value,with,commas""] ?column__notin=value1,value2,value3 Rows where column does not match any of the provided values. The inverse of __in= . Also supports JSON arrays. ?column__arraycontains=value Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value. This is only available if the json1 SQLite extension is enabled. ?column__arraynotcontains=value Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value. This is only available if the json1 SQLite extension is enabled. ?column__date=value Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02 . ?column__isnull=1 Matches rows where the column is null. ?column__notnull=1 Matches rows where the column is not null. ?column__isblank=1 Matches rows where the column is blank, meaning null or the empty string. ?column__notblank=1 Matches rows where the column is not blank.","[""JSON API"", ""Table arguments""]",[] json_api:expand-foreign-keys,json_api,expand-foreign-keys,Expanding foreign key references,"Datasette can detect foreign key relationships and resolve those references into labels. The HTML interface does this by default for every detected foreign key column - you can turn that off using ?_labels=off . You can request foreign keys be expanded in JSON using the _labels=on or _label=COLUMN special query string parameters. Here's what an expanded row looks like: [ { ""rowid"": 1, ""TreeID"": 141565, ""qLegalStatus"": { ""value"": 1, ""label"": ""Permitted Site"" }, ""qSpecies"": { ""value"": 1, ""label"": ""Myoporum laetum :: Myoporum"" }, ""qAddress"": ""501X Baker St"", ""SiteOrder"": 1 } ] The column in the foreign key table that is used for the label can be specified in metadata.json - see Specifying the label column for a table .","[""JSON API""]",[] json_api:id1,json_api,id1,JSON API,"Datasette provides a JSON API for your SQLite databases. Anything you can do through the Datasette user interface can also be accessed as JSON via the API. To access the API for a page, either click on the .json link on that page or edit the URL and add a .json extension to it.",[],[] json_api:id2,json_api,id2,Table arguments,The Datasette table view takes a number of special query string arguments.,"[""JSON API""]",[] json_api:json-api-cors,json_api,json-api-cors,Enabling CORS,"If you start Datasette with the --cors option, each JSON endpoint will be served with the following additional HTTP headers: [[[cog from datasette.utils import add_cors_headers import textwrap headers = {} add_cors_headers(headers) output = ""\n"".join(""{}: {}"".format(k, v) for k, v in headers.items()) cog.out(""\n::\n\n"") cog.out(textwrap.indent(output, ' ')) cog.out(""\n\n"") ]]] Access-Control-Allow-Origin: * Access-Control-Allow-Headers: Authorization, Content-Type Access-Control-Expose-Headers: Link Access-Control-Allow-Methods: GET, POST, HEAD, OPTIONS Access-Control-Max-Age: 3600 [[[end]]] This allows JavaScript running on any domain to make cross-origin requests to interact with the Datasette API. If you start Datasette without the --cors option only JavaScript running on the same domain as Datasette will be able to access the API. 
Here's how to serve data.db with CORS enabled: datasette data.db --cors","[""JSON API""]",[] json_api:json-api-default,json_api,json-api-default,Default representation,"The default JSON representation of data from a SQLite table or custom query looks like this: { ""ok"": true, ""rows"": [ { ""id"": 3, ""name"": ""Detroit"" }, { ""id"": 2, ""name"": ""Los Angeles"" }, { ""id"": 4, ""name"": ""Memnonia"" }, { ""id"": 1, ""name"": ""San Francisco"" } ], ""truncated"": false } ""ok"" is always true if an error did not occur. The ""rows"" key is a list of objects, each one representing a row. The ""truncated"" key lets you know if the query was truncated. This can happen if a SQL query returns more than 1,000 results (or the max_returned_rows setting). For table pages, an additional key ""next"" may be present. This indicates that the next page in the pagination set can be retrieved using ?_next=VALUE .","[""JSON API""]",[] json_api:json-api-discover-alternate,json_api,json-api-discover-alternate,Discovering the JSON for a page,"Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism. You can find this near the top of the source code of those pages, looking like this: The JSON URL is also made available in a Link HTTP header for the page: Link: https://latest.datasette.io/fixtures/sortable.json; rel=""alternate""; type=""application/json+datasette""","[""JSON API""]",[] json_api:json-api-shapes,json_api,json-api-shapes,Different shapes,"The _shape parameter can be used to access alternative formats for the rows key which may be more convenient for your application. There are three options: ?_shape=objects - ""rows"" is a list of JSON key/value objects - the default ?_shape=arrays - ""rows"" is a list of lists, where the order of values in each list matches the order of the columns ?_shape=array - a JSON array of objects - effectively just the ""rows"" key from the default representation ?_shape=array&_nl=on - a newline-separated list of JSON objects ?_shape=arrayfirst - a flat JSON array containing just the first value from each row ?_shape=object - a JSON object keyed using the primary keys of the rows _shape=arrays looks like this: { ""ok"": true, ""next"": null, ""rows"": [ [3, ""Detroit""], [2, ""Los Angeles""], [4, ""Memnonia""], [1, ""San Francisco""] ] } _shape=array looks like this: [ { ""id"": 3, ""name"": ""Detroit"" }, { ""id"": 2, ""name"": ""Los Angeles"" }, { ""id"": 4, ""name"": ""Memnonia"" }, { ""id"": 1, ""name"": ""San Francisco"" } ] _shape=array&_nl=on looks like this: {""id"": 1, ""value"": ""Myoporum laetum :: Myoporum""} {""id"": 2, ""value"": ""Metrosideros excelsa :: New Zealand Xmas Tree""} {""id"": 3, ""value"": ""Pinus radiata :: Monterey Pine""} _shape=arrayfirst looks like this: [1, 2, 3] _shape=object looks like this: { ""1"": { ""id"": 1, ""value"": ""Myoporum laetum :: Myoporum"" }, ""2"": { ""id"": 2, ""value"": ""Metrosideros excelsa :: New Zealand Xmas Tree"" }, ""3"": { ""id"": 3, ""value"": ""Pinus radiata :: Monterey Pine"" } ] The object shape is only available for queries against tables - custom SQL queries and views do not have an obvious primary key so cannot be returned using this format. The object keys are always strings. If your table has a compound primary key, the object keys will be a comma-separated string.","[""JSON API""]",[] json_api:json-api-write,json_api,json-api-write,The JSON write API,"Datasette provides a write API for JSON data. 
This is a POST-only API that requires an authenticated API token, see API Tokens . The token will need to have the specified Permissions .","[""JSON API""]",[] json_api:rowdeleteview,json_api,rowdeleteview,Deleting a row,"To delete a row, make a POST to /<database>/<table>/<row-pks>/-/delete . This requires the delete-row permission. POST /<database>/<table>/<row-pks>/-/delete Content-Type: application/json Authorization: Bearer dstok_ <row-pks> here is the tilde-encoded primary key value of the row to delete - or a comma-separated list of primary key values if the table has a composite primary key. If successful, this will return a 200 status code and a {""ok"": true} response body. Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.
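As a rough illustration, here is one way to call this endpoint from Python using only the standard library. The write_api_post() helper, the base URL, the content/docs/1 path components and the token value are all hypothetical - substitute your own database, table, primary key and API token:
import json
import urllib.request


def write_api_post(base_url, path, payload, token):
    # POST a JSON body to a Datasette write API endpoint with a Bearer token
    request = urllib.request.Request(
        base_url + path,
        data=json.dumps(payload).encode(""utf-8""),
        headers={
            ""Content-Type"": ""application/json"",
            ""Authorization"": ""Bearer "" + token,
        },
        method=""POST"",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


# Delete the row with primary key 1 from the docs table in the content database
print(write_api_post(""http://127.0.0.1:8001"", ""/content/docs/1/-/delete"", {}, ""dstok_...""))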
","[""JSON API"", ""The JSON write API""]",[] json_api:rowupdateview,json_api,rowupdateview,Updating a row,"To update a row, make a POST to /<database>/<table>/<row-pks>/-/update . This requires the update-row permission. POST /<database>/<table>/<row-pks>/-/update Content-Type: application/json Authorization: Bearer dstok_ { ""update"": { ""text_column"": ""New text string"", ""integer_column"": 3, ""float_column"": 3.14 } } <row-pks> here is the tilde-encoded primary key value of the row to update - or a comma-separated list of primary key values if the table has a composite primary key. You only need to pass the columns you want to update. Any other columns will be left unchanged. If successful, this will return a 200 status code and a {""ok"": true} response body. Add ""return"": true to the request body to return the updated row: { ""update"": { ""title"": ""New title"" }, ""return"": true } The returned JSON will look like this: { ""ok"": true, ""row"": { ""id"": 1, ""title"": ""New title"", ""other_column"": ""Will be present here too"" } } Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error. Pass ""alter"": true to automatically add any missing columns to the table. This requires the alter-table permission.","[""JSON API"", ""The JSON write API""]",[] json_api:tablecreateview,json_api,tablecreateview,Creating a table,"To create a table, make a POST to /<database>/-/create . This requires the create-table permission. POST /<database>/-/create Content-Type: application/json Authorization: Bearer dstok_ { ""table"": ""name_of_new_table"", ""columns"": [ { ""name"": ""id"", ""type"": ""integer"" }, { ""name"": ""title"", ""type"": ""text"" } ], ""pk"": ""id"" } The JSON here describes the table that will be created: table is the name of the table to create. This field is required. columns is a list of columns to create. Each column is a dictionary with name and type keys. name is the name of the column. This is required. type is the type of the column. This is optional - if not provided, text will be assumed. The valid types are text , integer , float and blob . pk is the primary key for the table. This is optional - if not provided, Datasette will create a SQLite table with a hidden rowid column. If the primary key is an integer column, it will be configured to automatically increment for each new record. If you set this to id without including an id column in the list of columns , Datasette will create an auto-incrementing integer ID column for you. pks can be used instead of pk to create a compound primary key. It should be a JSON list of column names to use in that primary key. ignore can be set to true to ignore existing rows by primary key if the table already exists. replace can be set to true to replace existing rows by primary key if the table already exists. This requires the update-row permission. alter can be set to true if you want to automatically add any missing columns to the table. This requires the alter-table permission. If the table is successfully created this will return a 201 status code and the following response: { ""ok"": true, ""database"": ""data"", ""table"": ""name_of_new_table"", ""table_url"": ""http://127.0.0.1:8001/data/name_of_new_table"", ""table_api_url"": ""http://127.0.0.1:8001/data/name_of_new_table.json"", ""schema"": ""CREATE TABLE [name_of_new_table] (\n [id] INTEGER PRIMARY KEY,\n [title] TEXT\n)"" }","[""JSON API"", ""The JSON write API""]",[] json_api:tablecreateview-example,json_api,tablecreateview-example,Creating a table from example data,"Instead of specifying columns directly you can instead pass a single example row or a list of rows .
Datasette will create a table with a schema that matches those rows and insert them for you: POST /<database>/-/create Content-Type: application/json Authorization: Bearer dstok_ { ""table"": ""creatures"", ""rows"": [ { ""id"": 1, ""name"": ""Tarantula"" }, { ""id"": 2, ""name"": ""Kākāpō"" } ], ""pk"": ""id"" } Doing this requires both the create-table and insert-row permissions. The 201 response here will be similar to the columns form, but will also include the number of rows that were inserted as row_count : { ""ok"": true, ""database"": ""data"", ""table"": ""creatures"", ""table_url"": ""http://127.0.0.1:8001/data/creatures"", ""table_api_url"": ""http://127.0.0.1:8001/data/creatures.json"", ""schema"": ""CREATE TABLE [creatures] (\n [id] INTEGER PRIMARY KEY,\n [name] TEXT\n)"", ""row_count"": 2 } You can call the create endpoint multiple times for the same table provided you are specifying the table using the rows or row option. New rows will be inserted into the table each time. This means you can use this API if you are unsure if the relevant table has been created yet. If you pass a row to the create endpoint with a primary key that already exists you will get an error that looks like this: { ""ok"": false, ""errors"": [ ""UNIQUE constraint failed: creatures.id"" ] } You can avoid this error by passing the same ""ignore"": true or ""replace"": true options to the create endpoint as you can to the insert endpoint . To use the ""replace"": true option you will also need the update-row permission. Pass ""alter"": true to automatically add any missing columns to the existing table that are present in the rows you are submitting. This requires the alter-table permission.
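Continuing the hypothetical write_api_post() helper sketched under Deleting a row above, creating a table from example rows could look like this (the data database name and the token are again placeholders):
payload = {
    ""table"": ""creatures"",
    ""rows"": [
        {""id"": 1, ""name"": ""Tarantula""},
        {""id"": 2, ""name"": ""Kākāpō""},
    ],
    ""pk"": ""id"",
}
# Datasette infers the column types from the example rows
response = write_api_post(""http://127.0.0.1:8001"", ""/data/-/create"", payload, ""dstok_..."")
print(response[""schema""], response[""row_count""])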
","[""JSON API"", ""The JSON write API""]",[] json_api:tabledropview,json_api,tabledropview,Dropping tables,"To drop a table, make a POST to /<database>/<table>/-/drop . This requires the drop-table permission. POST /<database>/<table>/-/drop Content-Type: application/json Authorization: Bearer dstok_ Without a POST body this will return a status 200 with a note about how many rows will be deleted: { ""ok"": true, ""database"": ""<database>"", ""table"": ""<table>"", ""row_count"": 5, ""message"": ""Pass \""confirm\"": true to confirm"" } If you pass the following POST body: { ""confirm"": true } Then the table will be dropped and a status 200 response of {""ok"": true} will be returned. Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.","[""JSON API"", ""The JSON write API""]",[] json_api:tableinsertview,json_api,tableinsertview,Inserting rows,"This requires the insert-row permission. A single row can be inserted using the ""row"" key: POST /<database>/<table>
/-/insert Content-Type: application/json Authorization: Bearer dstok_ { ""row"": { ""column1"": ""value1"", ""column2"": ""value2"" } } If successful, this will return a 201 status code and the newly inserted row, for example: { ""rows"": [ { ""id"": 1, ""column1"": ""value1"", ""column2"": ""value2"" } ] } To insert multiple rows at a time, use the same API method but send a list of dictionaries as the ""rows"" key: POST /<database>/<table>
/-/insert Content-Type: application/json Authorization: Bearer dstok_ { ""rows"": [ { ""column1"": ""value1"", ""column2"": ""value2"" }, { ""column1"": ""value3"", ""column2"": ""value4"" } ] } If successful, this will return a 201 status code and a {""ok"": true} response body. The maximum number of rows that can be submitted at once defaults to 100, but this can be changed using the max_insert_rows setting. To return the newly inserted rows, add the ""return"": true key to the request body: { ""rows"": [ { ""column1"": ""value1"", ""column2"": ""value2"" }, { ""column1"": ""value3"", ""column2"": ""value4"" } ], ""return"": true } This will return the same ""rows"" key as the single row example above. There is a small performance penalty for using this option. If any of your rows have a primary key that is already in use, you will get an error and none of the rows will be inserted: { ""ok"": false, ""errors"": [ ""UNIQUE constraint failed: new_table.id"" ] } Pass ""ignore"": true to ignore these errors and insert the other rows: { ""rows"": [ { ""id"": 1, ""column1"": ""value1"", ""column2"": ""value2"" }, { ""id"": 2, ""column1"": ""value3"", ""column2"": ""value4"" } ], ""ignore"": true } Or you can pass ""replace"": true to replace any rows with conflicting primary keys with the new values. This requires the update-row permission. Pass ""alter"": true to automatically add any missing columns to the table. This requires the alter-table permission.
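Reusing that same hypothetical write_api_post() helper from earlier, a bulk insert that skips any rows whose primary keys already exist might look like this (the database, table and token names are placeholders):
rows = [{""id"": i, ""title"": ""Row {}"".format(i)} for i in range(1, 101)]
response = write_api_post(
    ""http://127.0.0.1:8001"",
    ""/data/docs/-/insert"",
    {""rows"": rows, ""ignore"": True},
    ""dstok_..."",
)
print(response)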
","[""JSON API"", ""The JSON write API""]",[] json_api:tableupsertview,json_api,tableupsertview,Upserting rows,"An upsert is an insert or update operation. If a row with a matching primary key already exists it will be updated - otherwise a new row will be inserted. The upsert API is mostly the same shape as the insert API . It requires both the insert-row and update-row permissions. POST /<database>/<table>/-/upsert Content-Type: application/json Authorization: Bearer dstok_ { ""rows"": [ { ""id"": 1, ""title"": ""Updated title for 1"", ""description"": ""Updated description for 1"" }, { ""id"": 2, ""description"": ""Updated description for 2"" }, { ""id"": 3, ""title"": ""Item 3"", ""description"": ""Description for 3"" } ] } Imagine a table with a primary key of id and which already has rows with id values of 1 and 2 . The above example will: Update the row with id of 1 to set both title and description to the new values Update the row with id of 2 to set title to the new value - description will be left unchanged Insert a new row with id of 3 and both title and description set to the new values Similar to /-/insert , a row key with an object can be used instead of a rows array to upsert a single row. If successful, this will return a 200 status code and a {""ok"": true} response body. Add ""return"": true to the request body to return full copies of the affected rows after they have been inserted or updated: { ""rows"": [ { ""id"": 1, ""title"": ""Updated title for 1"", ""description"": ""Updated description for 1"" }, { ""id"": 2, ""description"": ""Updated description for 2"" }, { ""id"": 3, ""title"": ""Item 3"", ""description"": ""Description for 3"" } ], ""return"": true } This will return the following: { ""ok"": true, ""rows"": [ { ""id"": 1, ""title"": ""Updated title for 1"", ""description"": ""Updated description for 1"" }, { ""id"": 2, ""title"": ""Item 2"", ""description"": ""Updated description for 2"" }, { ""id"": 3, ""title"": ""Item 3"", ""description"": ""Description for 3"" } ] } When using upsert you must provide the primary key column (or columns if the table has a compound primary key) for every row, or you will get a 400 error: { ""ok"": false, ""errors"": [ ""Row 0 is missing primary key column(s): \""id\"""" ] } If your table does not have an explicit primary key you should pass the SQLite rowid key instead. Pass ""alter"": true to automatically add any missing columns to the table. This requires the alter-table permission.","[""JSON API"", ""The JSON write API""]",[] metadata:database-level-metadata,metadata,database-level-metadata,Database-level metadata,"""Database-level"" metadata refers to fields that can be specified for each database in a Datasette instance. These attributes should be listed under a database inside the ""databases"" field. The following are the full list of allowed database-level metadata fields: source source_url license license_url about about_url","[""Metadata"", ""Metadata reference""]",[] metadata:id1,metadata,id1,Metadata,"Data loves metadata. Any time you run Datasette you can optionally include a YAML or JSON file with metadata about your databases and tables. Datasette will then display that information in the web UI. Run Datasette like this: datasette database1.db database2.db --metadata metadata.yaml Your metadata.yaml file can look something like this: [[[cog from metadata_doc import metadata_example metadata_example(cog, { ""title"": ""Custom title for your index page"", ""description"": ""Some description text can go here"", ""license"": ""ODbL"", ""license_url"": ""https://opendatacommons.org/licenses/odbl/"", ""source"": ""Original Data Source"", ""source_url"": ""http://example.com/"" }) ]]] [[[end]]] Choosing YAML over JSON adds support for multi-line strings and comments. The above metadata will be displayed on the index page of your Datasette-powered site.
The source and license information will also be included in the footer of every page served by Datasette. Any special HTML characters in description will be escaped. If you want to include HTML in your description, you can use a description_html property instead.",[],[] metadata:id2,metadata,id2,Metadata reference,A full reference of every supported option in a metadata.json or metadata.yaml file.,"[""Metadata""]",[] metadata:label-columns,metadata,label-columns,Specifying the label column for a table,"Datasette's HTML interface attempts to display foreign key references as labelled hyperlinks. By default, it looks for referenced tables that only have two columns: a primary key column and one other. It assumes that the second column should be used as the link label. If your table has more than two columns you can specify which column should be used for the link label with the label_column property: [[[cog metadata_example(cog, { ""databases"": { ""database1"": { ""tables"": { ""example_table"": { ""label_column"": ""title"" } } } } }) ]]] [[[end]]]","[""Metadata""]",[] metadata:metadata-default-sort,metadata,metadata-default-sort,Setting a default sort order,"By default Datasette tables are sorted by primary key. You can over-ride this default for a specific table using the ""sort"" or ""sort_desc"" metadata properties: [[[cog metadata_example(cog, { ""databases"": { ""mydatabase"": { ""tables"": { ""example_table"": { ""sort"": ""created"" } } } } }) ]]] [[[end]]] Or use ""sort_desc"" to sort in descending order: [[[cog metadata_example(cog, { ""databases"": { ""mydatabase"": { ""tables"": { ""example_table"": { ""sort_desc"": ""created"" } } } } }) ]]] [[[end]]]","[""Metadata""]",[] metadata:metadata-hiding-tables,metadata,metadata-hiding-tables,Hiding tables,"You can hide tables from the database listing view (in the same way that FTS and SpatiaLite tables are automatically hidden) using ""hidden"": true : [[[cog metadata_example(cog, { ""databases"": { ""database1"": { ""tables"": { ""example_table"": { ""hidden"": True } } } } }) ]]] [[[end]]]","[""Metadata""]",[] metadata:metadata-page-size,metadata,metadata-page-size,Setting a custom page size,"Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the ""size"" key in metadata.json : [[[cog metadata_example(cog, { ""databases"": { ""mydatabase"": { ""tables"": { ""example_table"": { ""size"": 10 } } } } }) ]]] [[[end]]] This size can still be over-ridden by passing e.g. ?_size=50 in the query string.","[""Metadata""]",[] metadata:metadata-sortable-columns,metadata,metadata-sortable-columns,Setting which columns can be used for sorting,"Datasette allows any column to be used for sorting by default. If you need to control which columns are available for sorting you can do so using the optional sortable_columns key: [[[cog metadata_example(cog, { ""databases"": { ""database1"": { ""tables"": { ""example_table"": { ""sortable_columns"": [ ""height"", ""weight"" ] } } } } }) ]]] [[[end]]] This will restrict sorting of example_table to just the height and weight columns. 
You can also disable sorting entirely by setting ""sortable_columns"": [] You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so: [[[cog metadata_example(cog, { ""databases"": { ""my_database"": { ""tables"": { ""name_of_view"": { ""sortable_columns"": [ ""clicks"", ""impressions"" ] } } } } }) ]]] [[[end]]]","[""Metadata""]",[] metadata:metadata-source-license-about,metadata,metadata-source-license-about,"Source, license and about","The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional. source and source_url should be used to indicate where the underlying data came from. license and license_url should be used to indicate the license under which the data can be used. about and about_url can be used to link to further information about the project - an accompanying blog entry for example. For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page.","[""Metadata""]",[] metadata:per-database-and-per-table-metadata,metadata,per-database-and-per-table-metadata,Per-database and per-table metadata,"Metadata at the top level of the file will be shown on the index page and in the footer on every page of the site. The license and source is expected to apply to all of your data. You can also provide metadata at the per-database or per-table level, like this: [[[cog metadata_example(cog, { ""databases"": { ""database1"": { ""source"": ""Alternative source"", ""source_url"": ""http://example.com/"", ""tables"": { ""example_table"": { ""description_html"": ""Custom table description"", ""license"": ""CC BY 3.0 US"", ""license_url"": ""https://creativecommons.org/licenses/by/3.0/us/"" } } } } }) ]]] [[[end]]] Each of the top-level metadata fields can be used at the database and table level.","[""Metadata""]",[] metadata:table-level-metadata,metadata,table-level-metadata,Table-level metadata,"""Table-level"" metadata refers to fields that can be specified for each table in a Datasette instance. These attributes should be listed under a specific table using the ""tables"" field. The following are the full list of allowed table-level metadata fields: source source_url license license_url about about_url hidden sort/sort_desc size sortable_columns label_column facets fts_table fts_pk searchmode columns","[""Metadata"", ""Metadata reference""]",[] metadata:top-level-metadata,metadata,top-level-metadata,Top-level metadata,"""Top-level"" metadata refers to fields that can be specified at the root level of a metadata file. These attributes are meant to describe the entire Datasette instance. The following are the full list of allowed top-level metadata fields: title description description_html license license_url source source_url","[""Metadata"", ""Metadata reference""]",[] pages:databaseview-hidden,pages,databaseview-hidden,Hidden tables,"Some tables listed on the database page are treated as hidden. Hidden tables are not completely invisible - they can be accessed through the ""hidden tables"" link at the bottom of the page. They are hidden because they represent low-level implementation details which are generally not useful to end-users of Datasette. The following tables are hidden by default: Any table with a name that starts with an underscore - this is a Datasette convention to help plugins easily hide their own internal tables. 
Tables that have been configured as ""hidden"": true using Hiding tables . *_fts tables that implement SQLite full-text search indexes. Tables relating to the inner workings of the SpatiaLite SQLite extension. sqlite_stat tables used to store statistics used by the query optimizer.","[""Pages and API endpoints"", ""Database""]",[] pages:pages,pages,pages,Pages and API endpoints,"The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API.",[],[] performance:performance,performance,performance,Performance and caching,"Datasette runs on top of SQLite, and SQLite has excellent performance. For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes. That said, there are a number of tricks you can use to improve Datasette's performance.",[],[] performance:performance-immutable-mode,performance,performance-immutable-mode,Immutable mode,"If you can be certain that a SQLite database file will not be changed by another process you can tell Datasette to open that file in immutable mode . Doing so will disable all locking and change detection, which can result in improved query performance. This also enables further optimizations relating to HTTP caching, described below. To open a file in immutable mode pass it to the datasette command using the -i option: datasette -i data.db When you open a file in immutable mode like this Datasette will also calculate and cache the row counts for each table in that database when it first starts up, further improving performance.","[""Performance and caching""]",[] performance:performance-inspect,performance,performance-inspect,"Using ""datasette inspect""","Counting the rows in a table can be a very expensive operation on larger databases. In immutable mode Datasette performs this count only once and caches the results, but this can still cause server startup time to increase by several seconds or more. If you know that a database is never going to change you can precalculate the table row counts once and store then in a JSON file, then use that file when you later start the server. To create a JSON file containing the calculated row counts for a database, use the following: datasette inspect data.db --inspect-file=counts.json Then later you can start Datasette against the counts.json file and use it to skip the row counting step and speed up server startup: datasette -i data.db --inspect-file=counts.json You need to use the -i immutable mode against the database file here or the counts from the JSON file will be ignored. You will rarely need to use this optimization in every-day use, but several of the datasette publish commands described in Publishing data use this optimization for better performance when deploying a database file to a hosting provider.","[""Performance and caching""]",[] plugin_hooks:plugin-actions,plugin_hooks,plugin-actions,Action hooks,"Action hooks can be used to add items to the action menus that appear at the top of different pages within Datasette. Unlike menu_links() , actions which are displayed on every page, actions should only be relevant to the page the user is currently viewing. 
Each of these hooks should return a list of {""href"": ""..."", ""label"": ""...""} menu items, with optional ""description"": ""..."" keys describing each action in more detail. They can alternatively return an async def awaitable function which, when called, returns a list of those menu items.","[""Plugin hooks""]",[] plugin_hooks:plugin-event-tracking,plugin_hooks,plugin-event-tracking,Event tracking,"Datasette includes an internal mechanism for tracking notable events. This can be used for analytics, but can also be used by plugins that want to listen out for when key events occur (such as a table being created) and take action in response. Plugins can register to receive events using the track_event plugin hook. They can also define their own events for other plugins to receive using the register_events() plugin hook , combined with calls to the datasette.track_event() internal method .","[""Plugin hooks""]",[] plugin_hooks:plugin-hook-forbidden,plugin_hooks,plugin-hook-forbidden,"forbidden(datasette, request, message)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to render templates or execute SQL queries. request - Request object The current HTTP request. message - string A message hinting at why the request was forbidden. Plugins can use this to customize how Datasette responds when a 403 Forbidden error occurs - usually because a page failed a permission check, see Permissions . If a plugin hook wishes to react to the error, it should return a Response object . This example returns a redirect to a /-/login page: from datasette import hookimpl, Response from urllib.parse import urlencode @hookimpl def forbidden(request, message): return Response.redirect( ""/-/login?"" + urlencode({""message"": message}) ) The function can alternatively return an awaitable function if it needs to make any asynchronous method calls. This example renders a template: from datasette import hookimpl, Response @hookimpl def forbidden(datasette, request): async def inner(): return Response.html( await datasette.render_template( ""render_message.html"", request=request ) ) return inner","[""Plugin hooks""]",[] plugin_hooks:plugin-hook-homepage-actions,plugin_hooks,plugin-hook-homepage-actions,"homepage_actions(datasette, actor, request)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. actor - dictionary or None The currently authenticated actor . request - Request object The current HTTP request. Populates an actions menu on the top-level index homepage of the Datasette instance. This example adds a link to an imagined tool for editing the homepage, only for signed in users: from datasette import hookimpl @hookimpl def homepage_actions(datasette, actor): if actor: return [ { ""href"": datasette.urls.path( ""/-/customize-homepage"" ), ""label"": ""Customize homepage"", } ]","[""Plugin hooks"", ""Action hooks""]",[] plugin_hooks:plugin-hook-register-events,plugin_hooks,plugin-hook-register-events,register_events(datasette),"datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . This hook should return a list of Event subclasses that represent custom events that the plugin might send to the datasette.track_event() method.
This example registers event subclasses for ban-user and unban-user events: from dataclasses import dataclass from datasette import hookimpl, Event @dataclass class BanUserEvent(Event): name = ""ban-user"" user: dict @dataclass class UnbanUserEvent(Event): name = ""unban-user"" user: dict @hookimpl def register_events(): return [BanUserEvent, UnbanUserEvent] The plugin can then call datasette.track_event(...) to send a ban-user event: await datasette.track_event( BanUserEvent(user={""id"": 1, ""username"": ""cleverbot""}) )","[""Plugin hooks"", ""Event tracking""]",[] plugin_hooks:plugin-hook-register-magic-parameters,plugin_hooks,plugin-hook-register-magic-parameters,register_magic_parameters(datasette),"datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . Magic parameters can be used to add automatic parameters to canned queries . This plugin hook allows additional magic parameters to be defined by plugins. Magic parameters all take this format: _prefix_rest_of_parameter . The prefix indicates which magic parameter function should be called - the rest of the parameter is passed as an argument to that function. To register a new function, return it as a tuple of (string prefix, function) from this hook. The function you register should take two arguments: key and request , where key is the rest_of_parameter portion of the parameter and request is the current Request object . This example registers two new magic parameters: :_request_http_version returning the HTTP version of the current request, and :_uuid_new which returns a new UUID: from datasette import hookimpl from uuid import uuid4 def uuid(key, request): if key == ""new"": return str(uuid4()) else: raise KeyError def request(key, request): if key == ""http_version"": return request.scope[""http_version""] else: raise KeyError @hookimpl def register_magic_parameters(datasette): return [ (""request"", request), (""uuid"", uuid), ]","[""Plugin hooks""]",[] plugin_hooks:plugin-hook-top-canned-query,plugin_hooks,plugin-hook-top-canned-query,"top_canned_query(datasette, request, database, query_name)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . request - Request object The current HTTP request. database - string The name of the database. query_name - string The name of the canned query. Returns HTML to be displayed at the top of the canned query page.","[""Plugin hooks"", ""Template slots""]",[] plugin_hooks:plugin-hook-top-database,plugin_hooks,plugin-hook-top-database,"top_database(datasette, request, database)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . request - Request object The current HTTP request. database - string The name of the database. Returns HTML to be displayed at the top of the database page.","[""Plugin hooks"", ""Template slots""]",[] plugin_hooks:plugin-hook-top-homepage,plugin_hooks,plugin-hook-top-homepage,"top_homepage(datasette, request)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . request - Request object The current HTTP request. 
Returns HTML to be displayed at the top of the Datasette homepage.","[""Plugin hooks"", ""Template slots""]",[] plugin_hooks:plugin-hook-top-query,plugin_hooks,plugin-hook-top-query,"top_query(datasette, request, database, sql)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . request - Request object The current HTTP request. database - string The name of the database. sql - string The SQL query. Returns HTML to be displayed at the top of the query results page.","[""Plugin hooks"", ""Template slots""]",[] plugin_hooks:plugin-hook-top-row,plugin_hooks,plugin-hook-top-row,"top_row(datasette, request, database, table, row)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . request - Request object The current HTTP request. database - string The name of the database. table - string The name of the table. row - sqlite.Row The SQLite row object being displayed. Returns HTML to be displayed at the top of the row page.","[""Plugin hooks"", ""Template slots""]",[] plugin_hooks:plugin-hook-top-table,plugin_hooks,plugin-hook-top-table,"top_table(datasette, request, database, table)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) . request - Request object The current HTTP request. database - string The name of the database. table - string The name of the table. Returns HTML to be displayed at the top of the table page.","[""Plugin hooks"", ""Template slots""]",[] plugin_hooks:plugin-hook-view-actions,plugin_hooks,plugin-hook-view-actions,"view_actions(datasette, actor, database, view, request)","datasette - Datasette class You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries. actor - dictionary or None The currently authenticated actor . database - string The name of the database. view - string The name of the SQL view. request - Request object or None The current HTTP request. This can be None if the request object is not available. Like table_actions(datasette, actor, database, table, request) but for SQL views.","[""Plugin hooks"", ""Action hooks""]",[] plugin_hooks:plugin-page-extras,plugin_hooks,plugin-page-extras,Page extras,These plugin hooks can be used to affect the way HTML pages for different Datasette interfaces are rendered.,"[""Plugin hooks""]",[] plugin_hooks:plugin-register-permissions,plugin_hooks,plugin-register-permissions,register_permissions(datasette),"If your plugin needs to register additional permissions unique to that plugin - upload-csvs for example - you can return a list of those permissions from this hook. from datasette import hookimpl, Permission @hookimpl def register_permissions(datasette): return [ Permission( name=""upload-csvs"", abbr=None, description=""Upload CSV files"", takes_database=True, takes_resource=False, default=False, ) ] The fields of the Permission class are as follows: name - string The name of the permission, e.g. upload-csvs . This should be unique across all plugins that the user might have installed, so choose carefully. abbr - string or None An abbreviation of the permission, e.g. uc . This is optional - you can set it to None if you do not want to pick an abbreviation. Since this needs to be unique across all installed plugins it's best not to specify an abbreviation at all. 
If an abbreviation is provided it will be used when creating restricted signed API tokens. description - string or None A human-readable description of what the permission lets you do. Should make sense as the second part of a sentence that starts ""A user with this permission can ..."". takes_database - boolean True if this permission can be granted on a per-database basis, False if it is only valid at the overall Datasette instance level. takes_resource - boolean True if this permission can be granted on a per-resource basis. A resource is a database table, SQL view or canned query . default - boolean The default value for this permission if it is not explicitly granted to a user. True means the permission is granted by default, False means it is not. This should only be True if you want anonymous users to be able to take this action.","[""Plugin hooks""]",[] plugins:deploying-plugins-using-datasette-publish,plugins,deploying-plugins-using-datasette-publish,Deploying plugins using datasette publish,"The datasette publish and datasette package commands both take an optional --install argument. You can use this one or more times to tell Datasette to pip install specific plugins as part of the process: datasette publish cloudrun mydb.db --install=datasette-vega You can use the name of a package on PyPI or any of the other valid arguments to pip install such as a URL to a .zip file: datasette publish cloudrun mydb.db \ --install=https://url-to-my-package.zip","[""Plugins"", ""Installing plugins""]",[] plugins:one-off-plugins-using-plugins-dir,plugins,one-off-plugins-using-plugins-dir,One-off plugins using --plugins-dir,"You can also define one-off per-project plugins by saving them as plugin_name.py files in a plugins/ folder and then passing that folder to datasette using the --plugins-dir option: datasette mydb.db --plugins-dir=plugins/","[""Plugins"", ""Installing plugins""]",[] plugins:plugins-configuration,plugins,plugins-configuration,Plugin configuration,"Plugins can have their own configuration, embedded in a configuration file . Configuration options for plugins live within a ""plugins"" key in that file, which can be included at the root, database or table level. Here is an example of some plugin configuration for a specific table: [[[cog from metadata_doc import config_example config_example(cog, { ""databases"": { ""sf-trees"": { ""tables"": { ""Street_Tree_List"": { ""plugins"": { ""datasette-cluster-map"": { ""latitude_column"": ""lat"", ""longitude_column"": ""lng"" } } } } } } }) ]]] [[[end]]] This tells the datasette-cluster-map plugin which latitude and longitude columns should be used for a table called Street_Tree_List inside a database file called sf-trees.db .","[""Plugins""]",[] plugins:plugins-configuration-secret,plugins,plugins-configuration-secret,Secret configuration values,"Some plugins may need configuration that should stay secret - API keys for example. There are two ways in which you can store secret configuration values. As environment variables . If your secret lives in an environment variable that is available to the Datasette process, you can indicate that the configuration value should be read from that environment variable like so: [[[cog config_example(cog, { ""plugins"": { ""datasette-auth-github"": { ""client_secret"": { ""$env"": ""GITHUB_CLIENT_SECRET"" } } } }) ]]] [[[end]]] As values in separate files . Your secrets can also live in files on disk. 
To specify that a secret should be read from a file, provide the full file path like this: [[[cog config_example(cog, { ""plugins"": { ""datasette-auth-github"": { ""client_secret"": { ""$file"": ""/secrets/client-secret"" } } } }) ]]] [[[end]]] If you are publishing your data using the datasette publish family of commands, you can use the --plugin-secret option to set these secrets at publish time. For example, using Heroku you might run the following command: datasette publish heroku my_database.db \ --name my-heroku-app-demo \ --install=datasette-auth-github \ --plugin-secret datasette-auth-github client_id your_client_id \ --plugin-secret datasette-auth-github client_secret your_client_secret This will set the necessary environment variables and add the following to the deployed metadata.yaml : [[[cog config_example(cog, { ""plugins"": { ""datasette-auth-github"": { ""client_id"": { ""$env"": ""DATASETTE_AUTH_GITHUB_CLIENT_ID"" }, ""client_secret"": { ""$env"": ""DATASETTE_AUTH_GITHUB_CLIENT_SECRET"" } } } }) ]]] [[[end]]]","[""Plugins"", ""Plugin configuration""]",[] plugins:plugins-datasette-load-plugins,plugins,plugins-datasette-load-plugins,Controlling which plugins are loaded,"Datasette defaults to loading every plugin that is installed in the same virtual environment as Datasette itself. You can set the DATASETTE_LOAD_PLUGINS environment variable to a comma-separated list of plugin names to load a controlled subset of plugins instead. For example, to load just the datasette-vega and datasette-cluster-map plugins, set DATASETTE_LOAD_PLUGINS to datasette-vega,datasette-cluster-map : export DATASETTE_LOAD_PLUGINS='datasette-vega,datasette-cluster-map' datasette mydb.db Or: DATASETTE_LOAD_PLUGINS='datasette-vega,datasette-cluster-map' \ datasette mydb.db To disable the loading of all additional plugins, set DATASETTE_LOAD_PLUGINS to an empty string: export DATASETTE_LOAD_PLUGINS='' datasette mydb.db A quick way to test this setting is to use it with the datasette plugins command: DATASETTE_LOAD_PLUGINS='datasette-vega' datasette plugins This should output the following: [ { ""name"": ""datasette-vega"", ""static"": true, ""templates"": false, ""version"": ""0.6.2"", ""hooks"": [ ""extra_css_urls"", ""extra_js_urls"" ] } ]","[""Plugins""]",[] plugins:plugins-installing,plugins,plugins-installing,Installing plugins,"If a plugin has been packaged for distribution using setuptools you can use the plugin by installing it alongside Datasette in the same virtual environment or Docker container. You can install plugins using the datasette install command: datasette install datasette-vega You can uninstall plugins with datasette uninstall : datasette uninstall datasette-vega You can upgrade plugins with datasette install --upgrade or datasette install -U : datasette install -U datasette-vega This command can also be used to upgrade Datasette itself to the latest released version: datasette install -U datasette You can install multiple plugins at once by listing them as lines in a requirements.txt file like this: datasette-vega datasette-cluster-map Then pass that file to datasette install -r : datasette install -r requirements.txt The install and uninstall commands are thin wrappers around pip install and pip uninstall , which ensure that they run pip in the same virtual environment as Datasette itself.","[""Plugins""]",[] publish:publishing,publish,publishing,Publishing data,Datasette includes tools for publishing and deploying your data to the internet. 
The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them.,[],[] settings:config-dir,settings,config-dir,Configuration directory mode,"Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose: datasette one.db two.db \ --metadata=metadata.json \ --template-dir=templates/ \ --plugins-dir=plugins \ --static css:css As an alternative to this, you can run Datasette in configuration directory mode. Create a directory with the following structure: # In a directory called my-app: my-app/one.db my-app/two.db my-app/datasette.yaml my-app/metadata.json my-app/templates/index.html my-app/plugins/my_plugin.py my-app/static/my.css Now start Datasette by providing the path to that directory: datasette my-app/ Datasette will detect the files in that directory and automatically configure itself using them. It will serve all *.db files that it finds, will load metadata.json if it exists, and will load the templates , plugins and static folders if they are present. The files that can be included in this directory are as follows. All are optional. *.db (or *.sqlite3 or *.sqlite ) - SQLite database files that will be served by Datasette datasette.yaml - Configuration for the Datasette instance metadata.json - Metadata for those databases - metadata.yaml or metadata.yml can be used as well inspect-data.json - the result of running datasette inspect *.db --inspect-file=inspect-data.json from the configuration directory - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running templates/ - a directory containing Custom templates plugins/ - a directory containing plugins, see Writing one-off plugins static/ - a directory containing static files - these will be served from /static/filename.txt , see Serving static files","[""Settings""]",[] settings:id1,settings,id1,Settings,,[],[] settings:id2,settings,id2,Settings,"The following options can be set using --setting name value , or by storing them in the settings.json file for use with Configuration directory mode .","[""Settings""]",[] settings:setting-allow-csv-stream,settings,setting-allow-csv-stream,allow_csv_stream,"Enables the CSV export feature where an entire table (potentially hundreds of thousands of rows) can be exported as a single CSV file. This is turned on by default - you can turn it off like this: datasette mydatabase.db --setting allow_csv_stream off","[""Settings"", ""Settings""]",[] settings:setting-allow-download,settings,setting-allow-download,allow_download,"Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default. However, databases can only be downloaded if they are served in immutable mode and not in-memory. If downloading is unavailable for either of these reasons, the download link is hidden even if allow_download is on. 
To disable database downloads, use the following: datasette mydatabase.db --setting allow_download off","[""Settings"", ""Settings""]",[] settings:setting-allow-facet,settings,setting-allow-facet,allow_facet,"Allow users to specify columns they would like to facet on using the ?_facet=COLNAME URL parameter to the table view. This is enabled by default. If disabled, facets will still be displayed if they have been specifically enabled in metadata.json configuration for the table. Here's how to disable this feature: datasette mydatabase.db --setting allow_facet off","[""Settings"", ""Settings""]",[] settings:setting-allow-signed-tokens,settings,setting-allow-signed-tokens,allow_signed_tokens,"Should users be able to create signed API tokens to access Datasette? This is turned on by default. Use the following to turn it off: datasette mydatabase.db --setting allow_signed_tokens off Turning this setting off will disable the /-/create-token page, described here . It will also cause any incoming Authorization: Bearer dstok_... API tokens to be ignored.","[""Settings"", ""Settings""]",[] settings:setting-base-url,settings,setting-base-url,base_url,"If you are running Datasette behind a proxy, it may be useful to change the root path used for the Datasette instance. For example, if you are sending traffic from https://www.example.com/tools/datasette/ through to a proxied Datasette instance you may wish Datasette to use /tools/datasette/ as its root URL. You can do that like so: datasette mydatabase.db --setting base_url /tools/datasette/","[""Settings"", ""Settings""]",[] settings:setting-default-allow-sql,settings,setting-default-allow-sql,default_allow_sql,"Should users be able to execute arbitrary SQL queries by default? Setting this to off causes permission checks for execute-sql to fail by default. datasette mydatabase.db --setting default_allow_sql off Another way to achieve this is to add ""allow_sql"": false to your datasette.yaml file, as described in Controlling the ability to execute arbitrary SQL . This setting offers a more convenient way to do this.","[""Settings"", ""Settings""]",[] settings:setting-default-cache-ttl,settings,setting-default-cache-ttl,default_cache_ttl,"Default HTTP caching max-age header in seconds, used for Cache-Control: max-age=X . Can be over-ridden on a per-request basis using the ?_ttl= query string parameter. Set this to 0 to disable HTTP caching entirely. Defaults to 5 seconds. datasette mydatabase.db --setting default_cache_ttl 60","[""Settings"", ""Settings""]",[] settings:setting-default-facet-size,settings,setting-default-facet-size,default_facet_size,"The default number of unique rows returned by Facets is 30. You can customize it like this: datasette mydatabase.db --setting default_facet_size 50","[""Settings"", ""Settings""]",[] settings:setting-default-page-size,settings,setting-default-page-size,default_page_size,"The default number of rows returned by the table page. You can over-ride this on a per-page basis using the ?_size=80 query string parameter, provided you do not specify a value higher than the max_returned_rows setting. You can set this default using --setting like so: datasette mydatabase.db --setting default_page_size 50","[""Settings"", ""Settings""]",[] settings:setting-facet-suggest-time-limit-ms,settings,setting-facet-suggest-time-limit-ms,facet_suggest_time_limit_ms,"When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. 
The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet. You can increase this time limit like so: datasette mydatabase.db --setting facet_suggest_time_limit_ms 500","[""Settings"", ""Settings""]",[] settings:setting-facet-time-limit-ms,settings,setting-facet-time-limit-ms,facet_time_limit_ms,"This is the time limit Datasette allows for calculating a facet, which defaults to 200ms: datasette mydatabase.db --setting facet_time_limit_ms 1000","[""Settings"", ""Settings""]",[] settings:setting-force-https-urls,settings,setting-force-https-urls,force_https_urls,"Forces self-referential URLs in the JSON output to always use the https:// protocol. This is useful for cases where the application itself is hosted using HTTP but is served to the outside world via a proxy that enables HTTPS. datasette mydatabase.db --setting force_https_urls 1","[""Settings"", ""Settings""]",[] settings:setting-max-csv-mb,settings,setting-max-csv-mb,max_csv_mb,"The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB. You can disable the limit entirely by setting this to 0: datasette mydatabase.db --setting max_csv_mb 0","[""Settings"", ""Settings""]",[] settings:setting-max-insert-rows,settings,setting-max-insert-rows,max_insert_rows,"Maximum rows that can be inserted at a time using the bulk insert API, see Inserting rows . Defaults to 100. You can increase or decrease this limit like so: datasette mydatabase.db --setting max_insert_rows 1000","[""Settings"", ""Settings""]",[] settings:setting-max-returned-rows,settings,setting-max-returned-rows,max_returned_rows,"Datasette returns a maximum of 1,000 rows of data at a time. If you execute a query that returns more than 1,000 rows, Datasette will return the first 1,000 and include a warning that the result set has been truncated. You can use OFFSET/LIMIT or other methods in your SQL to implement pagination if you need to return more than 1,000 rows. You can increase or decrease this limit like so: datasette mydatabase.db --setting max_returned_rows 2000","[""Settings"", ""Settings""]",[] settings:setting-max-signed-tokens-ttl,settings,setting-max-signed-tokens-ttl,max_signed_tokens_ttl,"Maximum allowed expiry time for signed API tokens created by users. Defaults to 0 which means no limit - tokens can be created that will never expire. Set this to a value in seconds to limit the maximum expiry time. For example, to set that limit to 24 hours you would use: datasette mydatabase.db --setting max_signed_tokens_ttl 86400 This setting is enforced when incoming tokens are processed.","[""Settings"", ""Settings""]",[] settings:setting-publish-secrets,settings,setting-publish-secrets,Using secrets with datasette publish,"The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed. This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy. You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret option: datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5","[""Settings""]",[] settings:setting-secret,settings,setting-secret,Configuring the secret,"Datasette uses a secret string to sign secure values such as cookies. 
If you do not provide a secret, Datasette will create one when it starts up. This secret will reset every time the Datasette server restarts though, so things like authentication cookies and API tokens will not stay valid between restarts. You can pass a secret to Datasette in two ways: with the --secret command-line option or by setting a DATASETTE_SECRET environment variable. datasette mydb.db --secret=SECRET_VALUE_HERE Or: export DATASETTE_SECRET=SECRET_VALUE_HERE datasette mydb.db One way to generate a secure random secret is to use Python like this: python3 -c 'import secrets; print(secrets.token_hex(32))' cdb19e94283a20f9d42cca50c5a4871c0aa07392db308755d60a1a5b9bb0fa52 Plugin authors make use of this signing mechanism in their plugins using .sign(value, namespace=""default"") and .unsign(value, namespace=""default"") .","[""Settings""]",[] settings:setting-sql-time-limit-ms,settings,setting-sql-time-limit-ms,sql_time_limit_ms,"By default, queries have a time limit of one second. If a query takes longer than this to run Datasette will terminate the query and return an error. If this time limit is too short for you, you can customize it using the sql_time_limit_ms limit - for example, to increase it to 3.5 seconds: datasette mydatabase.db --setting sql_time_limit_ms 3500 You can optionally set a lower time limit for an individual query using the ?_timelimit=100 query string argument: /my-database/my-table?qSpecies=44&_timelimit=100 This would set the time limit to 100ms for that specific query. This feature is useful if you are working with databases of unknown size and complexity - a query that might make perfect sense for a smaller table could take too long to execute on a table with millions of rows. By setting custom time limits you can execute queries ""optimistically"" - e.g. give me an exact count of rows matching this query but only if it takes less than 100ms to calculate.","[""Settings"", ""Settings""]",[] settings:setting-suggest-facets,settings,setting-suggest-facets,suggest_facets,"Should Datasette calculate suggested facets? On by default, turn this off like so: datasette mydatabase.db --setting suggest_facets off","[""Settings"", ""Settings""]",[] settings:setting-truncate-cells-html,settings,setting-truncate-cells-html,truncate_cells_html,"In the HTML table view, truncate any strings that are longer than this value. The full value will still be available in CSV, JSON and on the individual row HTML page. Set this to 0 to disable truncation. datasette mydatabase.db --setting truncate_cells_html 0","[""Settings"", ""Settings""]",[] settings:using-setting,settings,using-setting,Using --setting,"Datasette supports a number of settings. These can be set using the --setting name value option to datasette serve . You can set multiple settings at once like this: datasette mydatabase.db \ --setting default_page_size 50 \ --setting sql_time_limit_ms 3500 \ --setting max_returned_rows 2000 Settings can also be specified in the datasette.yaml configuration file .","[""Settings""]",[] spatialite:installing-spatialite-on-linux,spatialite,installing-spatialite-on-linux,Installing SpatiaLite on Linux,"SpatiaLite is packaged for most Linux distributions. 
apt install spatialite-bin libsqlite3-mod-spatialite Depending on your distribution, you should be able to run Datasette something like this: datasette --load-extension=/usr/lib/x86_64-linux-gnu/mod_spatialite.so If you are unsure of the location of the module, try running locate mod_spatialite and see what comes back.","[""SpatiaLite"", ""Installation""]",[] spatialite:querying-polygons-using-within,spatialite,querying-polygons-using-within,Querying polygons using within(),"The within() SQL function can be used to check if a point is within a geometry: select name from places where within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom); The GeomFromText() function takes a string of well-known text. Note that the order used here is longitude then latitude . To run that same within() query in a way that benefits from the spatial index, use the following: select name from places where within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom) and rowid in ( SELECT pkid FROM idx_places_geom where xmin < -3.1724366 and xmax > -3.1724366 and ymin < 51.4704448 and ymax > 51.4704448 );","[""SpatiaLite""]",[] spatialite:spatial-indexing-latitude-longitude-columns,spatialite,spatial-indexing-latitude-longitude-columns,Spatial indexing latitude/longitude columns,"Here's a recipe for taking a table with existing latitude and longitude columns, adding a SpatiaLite POINT geometry column to that table, populating the new column and then populating a spatial index: import sqlite3 conn = sqlite3.connect(""museums.db"") # Load the spatialite extension: conn.enable_load_extension(True) conn.load_extension(""/usr/local/lib/mod_spatialite.dylib"") # Initialize spatial metadata for this database: conn.execute(""select InitSpatialMetadata(1)"") # Add a geometry column called point_geom to our museums table: conn.execute( ""SELECT AddGeometryColumn('museums', 'point_geom', 4326, 'POINT', 2);"" ) # Now update that geometry column with the lat/lon points conn.execute( """""" UPDATE museums SET point_geom = GeomFromText('POINT('||""longitude""||' '||""latitude""||')',4326); """""" ) # Now add a spatial index to that column conn.execute( 'select CreateSpatialIndex(""museums"", ""point_geom"");' ) # If you don't commit, your changes will not be persisted: conn.commit() conn.close()","[""SpatiaLite""]",[] spatialite:spatialite-installation,spatialite,spatialite-installation,Installation,,"[""SpatiaLite""]",[] sql_queries:canned-queries-json-api,sql_queries,canned-queries-json-api,JSON API for writable canned queries,"Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON. To submit JSON to a writable canned query, encode key/value parameters as a JSON document: POST /mydatabase/add_message {""message"": ""Message goes here""} You can also continue to submit data using regular form encoding, like so: POST /mydatabase/add_message message=Message+goes+here There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page. 
Set an Accept: application/json header on your request Include ?_json=1 in the URL that you POST to Include ""_json"": 1 in your JSON body, or &_json=1 in your form encoded body The JSON response will look like this: { ""ok"": true, ""message"": ""Query executed, 1 row affected"", ""redirect"": ""/data/add_name"" } The ""message"" and ""redirect"" values here will take into account on_success_message , on_success_message_sql , on_success_redirect , on_error_message and on_error_redirect , if they have been set.","[""Running SQL queries"", ""Canned queries""]",[] sql_queries:canned-queries-magic-parameters,sql_queries,canned-queries-magic-parameters,Magic parameters,"Named parameters that start with an underscore are special: they can be used to automatically add values created by Datasette that are not contained in the incoming form fields or query string. These magic parameters are only supported for canned queries: to avoid security issues (such as queries that extract the user's private cookies) they are not available to SQL that is executed by the user as a custom SQL query. Available magic parameters are: _actor_* - e.g. _actor_id , _actor_name Fields from the currently authenticated Actors . _header_* - e.g. _header_user_agent Header from the incoming HTTP request. The key should be in lower case and with hyphens converted to underscores e.g. _header_user_agent or _header_accept_language . _cookie_* - e.g. _cookie_lang The value of the incoming cookie of that name. _now_epoch The number of seconds since the Unix epoch. _now_date_utc The date in UTC, e.g. 2020-06-01 _now_datetime_utc The ISO 8601 datetime in UTC, e.g. 2020-06-24T18:01:07Z _random_chars_* - e.g. _random_chars_128 A random string of characters of the specified length. Here's an example configuration that adds a message from the authenticated user, storing various pieces of additional metadata using magic parameters: [[[cog config_example(cog, """""" databases: mydatabase: queries: add_message: allow: id: ""*"" sql: |- INSERT INTO messages ( user_id, message, datetime ) VALUES ( :_actor_id, :message, :_now_datetime_utc ) write: true """""") ]]] [[[end]]] The form presented at /mydatabase/add_message will have just a field for message - the other parameters will be populated by the magic parameter mechanism. Additional custom magic parameters can be added by plugins using the register_magic_parameters(datasette) hook.","[""Running SQL queries"", ""Canned queries""]",[] sql_queries:canned-queries-options,sql_queries,canned-queries-options,Additional canned query options,Additional options can be specified for canned queries in the YAML or JSON configuration.,"[""Running SQL queries"", ""Canned queries""]",[] sql_queries:canned-queries-writable,sql_queries,canned-queries-writable,Writable canned queries,"Canned queries by default are read-only. You can use the ""write"": true key to indicate that a canned query can write to the database. See Access to specific canned queries for details on how to add permission checks to canned queries, using the ""allow"" key. [[[cog config_example(cog, { ""databases"": { ""mydatabase"": { ""queries"": { ""add_name"": { ""sql"": ""INSERT INTO names (name) VALUES (:name)"", ""write"": True } } } } }) ]]] [[[end]]] This configuration will create a page at /mydatabase/add_name displaying a form with a name field. Submitting that form will execute the configured INSERT query. 
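Since the writable canned query and its JSON API are both described above, a short client-side sketch may help tie them together. This is illustrative only: the local URL and port are assumptions, the add_name query comes from the configuration example above, and the third-party httpx library is used rather than anything provided by Datasette:

```python
# Hypothetical client for the add_name writable canned query.
import httpx

response = httpx.post(
    "http://127.0.0.1:8001/mydatabase/add_name",
    # "_json": 1 requests a JSON response instead of an HTTP redirect
    json={"name": "Cleo", "_json": 1},
)
print(response.json())
# Expected shape, per the JSON API section above:
# {"ok": True, "message": "Query executed, 1 row affected", "redirect": "..."}
```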
You can customize how Datasette represents success and errors using the following optional properties: on_success_message - the message shown when a query is successful on_success_message_sql - alternative to on_success_message : a SQL query that should be executed to generate the message on_success_redirect - the path or URL the user is redirected to on success on_error_message - the message shown when a query throws an error on_error_redirect - the path or URL the user is redirected to on error For example: [[[cog config_example(cog, { ""databases"": { ""mydatabase"": { ""queries"": { ""add_name"": { ""sql"": ""INSERT INTO names (name) VALUES (:name)"", ""params"": [""name""], ""write"": True, ""on_success_message_sql"": ""select 'Name inserted: ' || :name"", ""on_success_redirect"": ""/mydatabase/names"", ""on_error_message"": ""Name insert failed"", ""on_error_redirect"": ""/mydatabase"", } } } } }) ]]] [[[end]]] You can use ""params"" to explicitly list the named parameters that should be displayed as form fields - otherwise they will be automatically detected. ""params"" is not necessary in the above example, since without it ""name"" would be automatically detected from the query. You can pre-populate form fields when the page first loads using a query string, e.g. /mydatabase/add_name?name=Prepopulated . The user will have to submit the form to execute the query. If you specify a query in ""on_success_message_sql"" , that query will be executed after the main query. The first column of the first row returned by that query will be displayed as a success message. Named parameters from the main query will be made available to the success message query as well.","[""Running SQL queries"", ""Canned queries""]",[] sql_queries:hide-sql,sql_queries,hide-sql,hide_sql,"Canned queries default to displaying their SQL query at the top of the page. If the query is extremely long you may want to hide it by default, with a ""show"" link that can be used to make it visible. Add the ""hide_sql"": true option to hide the SQL query by default.","[""Running SQL queries"", ""Canned queries"", ""Additional canned query options""]",[] sql_queries:id1,sql_queries,id1,Canned queries,"As an alternative to adding views to your database, you can define canned queries inside your datasette.yaml file. Here's an example: [[[cog from metadata_doc import config_example config_example(cog, { ""databases"": { ""sf-trees"": { ""queries"": { ""just_species"": { ""sql"": ""select qSpecies from Street_Tree_List"" } } } } }) ]]] [[[end]]] Then run Datasette like this: datasette sf-trees.db -c datasette.yaml Each canned query will be listed on the database index page, and will also get its own URL at: /database-name/canned-query-name For the above example, that URL would be: /sf-trees/just_species You can optionally include ""title"" and ""description"" keys to show a title and description on the canned query page. As with regular table metadata you can alternatively specify ""description_html"" to have your description rendered as HTML (rather than having HTML special characters escaped).","[""Running SQL queries""]",[] sql_queries:id2,sql_queries,id2,Pagination,"Datasette's default table pagination is designed to be extremely efficient. SQL OFFSET/LIMIT pagination can have a significant performance penalty once you get into multiple thousands of rows, as each page still requires the database to scan through every preceding row to find the correct offset. 
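To make the cost of deep OFFSET/LIMIT pagination concrete, here is a rough, hypothetical benchmark sketch. The big.db file, the items table, its integer id primary key and the row counts are all assumptions for illustration, not anything from the documentation above; the second query previews the keyset style described next:

```python
# Compare a deep OFFSET query with a keyset-style query on a large table.
import sqlite3
import time

conn = sqlite3.connect("big.db")  # assumed to contain a large "items" table

start = time.perf_counter()
# OFFSET forces SQLite to step over the first 500,000 rows to reach the page.
conn.execute(
    "select * from items order by id limit 100 offset 500000"
).fetchall()
print("offset pagination:", time.perf_counter() - start)

start = time.perf_counter()
# Keyset-style: the primary key index jumps straight to the page.
conn.execute(
    "select * from items where id > 500000 order by id limit 100"
).fetchall()
print("keyset pagination:", time.perf_counter() - start)
```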
When paginating through tables, Datasette instead orders the rows in the table by their primary key and performs a WHERE clause against the last seen primary key for the previous page. For example: select rowid, * from Tree_List where rowid > 200 order by rowid limit 101 This represents page three for this particular table, with a page size of 100. Note that we request 101 items in the limit clause rather than 100. This allows us to detect if we are on the last page of the results: if the query returns less than 101 rows we know we have reached the end of the pagination set. Datasette will only return the first 100 rows - the 101st is used purely to detect if there should be another page. Since the where clause acts against the index on the primary key, the query is extremely fast even for records that are a long way into the overall pagination set.","[""Running SQL queries""]",[] sql_queries:sql,sql_queries,sql,Running SQL queries,"Datasette treats SQLite database files as read-only and immutable. This means it is not possible to execute INSERT or UPDATE statements using Datasette, which allows us to expose SELECT statements to the outside world without needing to worry about SQL injection attacks. The easiest way to execute custom SQL against Datasette is through the web UI. The database index page includes a SQL editor that lets you run any SELECT query you like. You can also construct queries using the filter interface on the tables page, then click ""View and edit SQL"" to open that query in the custom SQL editor. Note that this interface is only available if the execute-sql permission is allowed. See Controlling the ability to execute arbitrary SQL . Any Datasette SQL query is reflected in the URL of the page, allowing you to bookmark them, share them with others and navigate through previous queries using your browser back button. You can also retrieve the results of any query as JSON by adding .json to the base URL.",[],[] testing_plugins:testing-plugins-datasette-test-instance,testing_plugins,testing-plugins-datasette-test-instance,Setting up a Datasette test instance,"The above example shows the easiest way to start writing tests against a Datasette instance: from datasette.app import Datasette import pytest @pytest.mark.asyncio async def test_plugin_is_installed(): datasette = Datasette(memory=True) response = await datasette.client.get(""/-/plugins.json"") assert response.status_code == 200 Creating a Datasette() instance like this is a useful shortcut in tests, but there is one detail you need to be aware of. It's important to ensure that the async method .invoke_startup() is called on that instance. You can do that like this: datasette = Datasette(memory=True) await datasette.invoke_startup() This method registers any startup(datasette) or prepare_jinja2_environment(env, datasette) plugins that might themselves need to make async calls. If you are using await datasette.client.get() and similar methods then you don't need to worry about this - Datasette automatically calls invoke_startup() the first time it handles a request.","[""Testing plugins""]",[] testing_plugins:testing-plugins-pdb,testing_plugins,testing-plugins-pdb,Using pdb for errors thrown inside Datasette,"If an exception occurs within Datasette itself during a test, the response returned to your plugin will have a response.status_code value of 500. You can add pdb=True to the Datasette constructor to drop into a Python debugger session inside your test run instead of getting back a 500 response code. 
This is equivalent to running the datasette command-line tool with the --pdb option. Here's what that looks like in a test function: async def test_that_opens_the_debugger_or_errors(): ds = Datasette([db_path], pdb=True) response = await ds.client.get(""/"") If you use this pattern you will need to run pytest with the -s option to avoid capturing stdin/stdout in order to interact with the debugger prompt.","[""Testing plugins""]",[] testing_plugins:testing-plugins-register-in-test,testing_plugins,testing-plugins-register-in-test,Registering a plugin for the duration of a test,"When writing tests for plugins you may find it useful to register a test plugin just for the duration of a single test. You can do this using pm.register() and pm.unregister() like this: from datasette import hookimpl from datasette.app import Datasette from datasette.plugins import pm import pytest @pytest.mark.asyncio async def test_using_test_plugin(): class TestPlugin: __name__ = ""TestPlugin"" # Use hookimpl and method names to register hooks @hookimpl def register_routes(self): return [ (r""^/error$"", lambda: 1 / 0), ] pm.register(TestPlugin(), name=""undo"") try: # The test implementation goes here datasette = Datasette() response = await datasette.client.get(""/error"") assert response.status_code == 500 finally: pm.unregister(name=""undo"") To reuse the same temporary plugin in multiple tests, you can register it inside a fixture in your conftest.py file like this: from datasette import hookimpl from datasette.app import Datasette from datasette.plugins import pm import pytest import pytest_asyncio @pytest_asyncio.fixture async def datasette_with_plugin(): class TestPlugin: __name__ = ""TestPlugin"" @hookimpl def register_routes(self): return [ (r""^/error$"", lambda: 1 / 0), ] pm.register(TestPlugin(), name=""undo"") try: yield Datasette() finally: pm.unregister(name=""undo"") Note the yield statement here - this ensures that the finally: block that unregisters the plugin is executed only after the test function itself has completed. Then in a test: @pytest.mark.asyncio async def test_error(datasette_with_plugin): response = await datasette_with_plugin.client.get(""/error"") assert response.status_code == 500","[""Testing plugins""]",[] writing_plugins:writing-plugins-building-urls,writing_plugins,writing-plugins-building-urls,Building URLs within plugins,"Plugins that define their own custom user interface elements may need to link to other pages within Datasette. This can be a bit tricky if the Datasette instance is using the base_url configuration setting to run behind a proxy, since that can cause Datasette's URLs to include an additional prefix. The datasette.urls object provides internal methods for correctly generating URLs to different pages within Datasette, taking any base_url configuration into account. This object is exposed in templates as the urls variable, which can be used like this: <a href=""{{ urls.instance() }}"">Back to the Homepage</a> See datasette.urls for full details on this object.","[""Writing plugins""]",[] writing_plugins:writing-plugins-configuration,writing_plugins,writing-plugins-configuration,Writing plugins that accept configuration,"When you are writing plugins, you can access plugin configuration like this using the datasette.plugin_config() method. 
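For example, inside any plugin hook that receives the datasette object you can look up your plugin's own configuration. This is a minimal sketch; the plugin name here reuses datasette-cluster-map from the configuration examples above purely for illustration:

```python
# Returns the configuration dictionary for this plugin from datasette.yaml,
# or None if no configuration was provided.
plugin_config = datasette.plugin_config("datasette-cluster-map")
```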
If you know you need plugin configuration for a specific table, you can access it like this: plugin_config = datasette.plugin_config( ""datasette-cluster-map"", database=""sf-trees"", table=""Street_Tree_List"" ) This will return the {""latitude_column"": ""lat"", ""longitude_column"": ""lng""} dictionary from the above example. If there is no configuration for that plugin, the method will return None . If it cannot find the requested configuration at the table layer, it will fall back to the database layer and then the root layer. For example, a user may have set the plugin configuration option inside datasette.yaml like so: [[[cog from metadata_doc import metadata_example metadata_example(cog, { ""databases"": { ""sf-trees"": { ""plugins"": { ""datasette-cluster-map"": { ""latitude_column"": ""xlat"", ""longitude_column"": ""xlng"" } } } } }) ]]] [[[end]]] In this case, the above code would return that configuration for ANY table within the sf-trees database. The plugin configuration could also be set at the top level of datasette.yaml : [[[cog metadata_example(cog, { ""plugins"": { ""datasette-cluster-map"": { ""latitude_column"": ""xlat"", ""longitude_column"": ""xlng"" } } }) ]]] [[[end]]] Now that datasette-cluster-map plugin configuration will apply to every table in every database.","[""Writing plugins""]",[] writing_plugins:writing-plugins-tracing,writing_plugins,writing-plugins-tracing,Tracing plugin hooks,"The DATASETTE_TRACE_PLUGINS environment variable turns on detailed tracing showing exactly which hooks are being run. This can be useful for understanding how Datasette is using your plugin. DATASETTE_TRACE_PLUGINS=1 datasette mydb.db Example output: actor_from_request: { 'datasette': <Datasette object>, 'request': <Request object> } Hook implementations: [ <HookImpl ...>, <HookImpl ...>, <HookImpl ...> ] Results: [{'id': 'root'}]","[""Writing plugins""]",[]
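Putting several of the pieces above together, here is a minimal sketch of a one-off plugin (suitable for --plugins-dir) that reads its own configuration with datasette.plugin_config() and exposes a value as a magic parameter for canned queries. The plugin name datasette-greeting and its text configuration key are hypothetical, not part of Datasette:

```python
# plugins/greeting.py - a hypothetical one-off plugin
from datasette import hookimpl


@hookimpl
def register_magic_parameters(datasette):
    # Read this plugin's own configuration; may be missing entirely.
    config = datasette.plugin_config("datasette-greeting") or {}

    def greeting(key, request):
        # Available to canned queries as :_greeting_text
        if key == "text":
            return config.get("text", "hello")
        raise KeyError

    return [("greeting", greeting)]
```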