{"rowid": 582, "title": "datasette package", "content": "If you have Docker installed (e.g. using Docker for Mac ) you can use the datasette package command to create a new Docker image in your local repository containing the datasette app bundled together with one or more SQLite databases: \n datasette package mydatabase.db \n Here's example output for the package command: \n datasette package parlgov.db --extra-options="--setting sql_time_limit_ms 2500"\nSending build context to Docker daemon 4.459MB\nStep 1/7 : FROM python:3.11.0-slim-bullseye\n ---> 79e1dc9af1c1\nStep 2/7 : COPY . /app\n ---> Using cache\n ---> cd4ec67de656\nStep 3/7 : WORKDIR /app\n ---> Using cache\n ---> 139699e91621\nStep 4/7 : RUN pip install datasette\n ---> Using cache\n ---> 340efa82bfd7\nStep 5/7 : RUN datasette inspect parlgov.db --inspect-file inspect-data.json\n ---> Using cache\n ---> 5fddbe990314\nStep 6/7 : EXPOSE 8001\n ---> Using cache\n ---> 8e83844b0fed\nStep 7/7 : CMD datasette serve parlgov.db --port 8001 --inspect-file inspect-data.json --setting sql_time_limit_ms 2500\n ---> Using cache\n ---> 1bd380ea8af3\nSuccessfully built 1bd380ea8af3 \n You can now run the resulting container like so: \n docker run -p 8081:8001 1bd380ea8af3 \n This exposes port 8001 inside the container as port 8081 on your host machine, so you can access the application at http://localhost:8081/ \n You can customize the port that is exposed by the container using the --port option: \n datasette package mydatabase.db --port 8080 \n See datasette package for the full list of options for this command.", "sections_fts": 663, "rank": null} {"rowid": 581, "title": "Custom metadata and plugins", "content": "datasette publish accepts a number of additional options which can be used to further customize your Datasette instance. \n You can define your own Metadata and deploy that with your instance like so: \n datasette publish cloudrun --service=my-service mydatabase.db -m metadata.json \n If you just want to set the title, license or source information you can do that directly using extra options to datasette publish : \n datasette publish cloudrun mydatabase.db --service=my-service \\\n --title="Title of my database" \\\n --source="Where the data originated" \\\n --source_url="http://www.example.com/" \n You can also specify plugins you would like to install. For example, if you want to include the datasette-vega visualization plugin you can use the following: \n datasette publish cloudrun mydatabase.db --service=my-service --install=datasette-vega \n If a plugin has any Secret configuration values you can use the --plugin-secret option to set those secrets at publish time. For example, using Heroku with datasette-auth-github you might run the following command: \n datasette publish heroku my_database.db \\\n --name my-heroku-app-demo \\\n --install=datasette-auth-github \\\n --plugin-secret datasette-auth-github client_id your_client_id \\\n --plugin-secret datasette-auth-github client_secret your_client_secret", "sections_fts": 663, "rank": null} {"rowid": 580, "title": "Publishing to Fly", "content": "Fly is a competitively priced Docker-compatible hosting platform that supports running applications in globally distributed data centers close to your end users. You can deploy Datasette instances to Fly using the datasette-publish-fly plugin. 
\n pip install datasette-publish-fly\ndatasette publish fly mydatabase.db --app=\"my-app\" \n Consult the datasette-publish-fly README for more details.", "sections_fts": 663, "rank": null} {"rowid": 579, "title": "Publishing to Vercel", "content": "Vercel - previously known as Zeit Now - provides a layer over AWS Lambda to allow for quick, scale-to-zero deployment. You can deploy Datasette instances to Vercel using the datasette-publish-vercel plugin. \n pip install datasette-publish-vercel\ndatasette publish vercel mydatabase.db --project my-database-project \n Not every feature is supported: consult the datasette-publish-vercel README for more details.", "sections_fts": 663, "rank": null} {"rowid": 578, "title": "Publishing to Heroku", "content": "To publish your data using Heroku , first create an account there and install and configure the Heroku CLI tool . \n You can publish one or more databases to Heroku using the following command: \n datasette publish heroku mydatabase.db \n This will output some details about the new deployment, including a URL like this one: \n https://limitless-reef-88278.herokuapp.com/ deployed to Heroku \n You can specify a custom app name by passing -n my-app-name to the publish command. This will also allow you to overwrite an existing app. \n Rather than deploying directly you can use the --generate-dir option to output the files that would be deployed to a directory: \n datasette publish heroku mydatabase.db --generate-dir=/tmp/deploy-this-to-heroku \n See datasette publish heroku for the full list of options for this command.", "sections_fts": 663, "rank": null} {"rowid": 577, "title": "Publishing to Google Cloud Run", "content": "Google Cloud Run allows you to publish data in a scale-to-zero environment, so your application will start running when the first request is received and will shut down again when traffic ceases. This means you only pay for time spent serving traffic. \n \n Cloud Run is a great option for inexpensively hosting small, low traffic projects - but costs can add up for projects that serve a lot of requests. \n Be particularly careful if your project has tables with large numbers of rows. Search engine crawlers that index a page for every row could result in a high bill. \n The datasette-block-robots plugin can be used to request search engine crawlers omit crawling your site, which can help avoid this issue. \n \n You will first need to install and configure the Google Cloud CLI tools by following these instructions . \n You can then publish one or more SQLite database files to Google Cloud Run using the following command: \n datasette publish cloudrun mydatabase.db --service=my-database \n A Cloud Run service is a single hosted application. The service name you specify will be used as part of the Cloud Run URL. If you deploy to a service name that you have used in the past your new deployment will replace the previous one. \n If you omit the --service option you will be asked to pick a service name interactively during the deploy. \n You may need to interact with prompts from the tool. Many of the prompts ask for values that can be set as properties for the Google Cloud SDK if you want to avoid the prompts. \n For example, the default region for the deployed instance can be set using the command: \n gcloud config set run/region us-central1 \n You should replace us-central1 with your desired region . Alternately, you can specify the region by setting the CLOUDSDK_RUN_REGION environment variable. 
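For example, a region-pinned deploy might look like this - a sketch combining the publish command from above with that environment variable ( europe-west1 is just an illustrative region value): 
 CLOUDSDK_RUN_REGION=europe-west1 datasette publish cloudrun mydatabase.db --service=my-database 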
\n Once it has finished it will output a URL like this one: \n Service [my-service] revision [my-service-00001] has been deployed\nand is serving traffic at https://my-service-j7hipcg4aq-uc.a.run.app \n Cloud Run provides a URL on the .run.app domain, but you can also point your own domain or subdomain at your Cloud Run service - see mapping custom domains in the Cloud Run documentation for details. \n See datasette publish cloudrun for the full list of options for this command.", "sections_fts": 663, "rank": null} {"rowid": 576, "title": "datasette publish", "content": "Once you have created a SQLite database (e.g. using csvs-to-sqlite ) you can deploy it to a hosting account using a single command. \n You will need a hosting account with Heroku or Google Cloud . Once you have created your account you will need to install and configure the heroku or gcloud command-line tools.", "sections_fts": 663, "rank": null} {"rowid": 575, "title": "Publishing data", "content": "Datasette includes tools for publishing and deploying your data to the internet. The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them.", "sections_fts": 663, "rank": null} {"rowid": 574, "title": "/-/messages", "content": "The debug tool at /-/messages can be used to set flash messages to try out that feature. See .add_message(request, message, type=datasette.INFO) for details of this feature.", "sections_fts": 663, "rank": null} {"rowid": 573, "title": "/-/actor", "content": "Shows the currently authenticated actor. Useful for debugging Datasette authentication plugins. \n {\n "actor": {\n "id": 1,\n "username": "some-user"\n }\n}", "sections_fts": 663, "rank": null} {"rowid": 572, "title": "/-/threads", "content": "Shows details of threads and asyncio tasks. Threads example : \n {\n "num_threads": 2,\n "threads": [\n {\n "daemon": false,\n "ident": 4759197120,\n "name": "MainThread"\n },\n {\n "daemon": true,\n "ident": 123145319682048,\n "name": "Thread-1"\n }\n ],\n "num_tasks": 3,\n "tasks": [\n " cb=[set.discard()]>",\n " wait_for=()]> cb=[run_until_complete..()]>",\n " wait_for=()]>>"\n ]\n}", "sections_fts": 663, "rank": null} {"rowid": 571, "title": "/-/databases", "content": "Shows currently attached databases. Databases example : \n [\n {\n "hash": null,\n "is_memory": false,\n "is_mutable": true,\n "name": "fixtures",\n "path": "fixtures.db",\n "size": 225280\n }\n]", "sections_fts": 663, "rank": null} {"rowid": 570, "title": "/-/config", "content": "Shows the configuration for this instance of Datasette. This is generally the contents of the datasette.yaml or datasette.json file, which can include plugin configuration as well. Config example : \n {\n "settings": {\n "template_debug": true,\n "trace_debug": true,\n "force_https_urls": true\n }\n} \n Any keys that include one of the following substrings in their names will be returned as redacted *** output, to help avoid accidentally leaking private configuration information: secret , key , password , token , hash , dsn .", "sections_fts": 663, "rank": null} {"rowid": 569, "title": "/-/settings", "content": "Shows the Settings for this instance of Datasette. 
Settings example : \n {\n \"default_facet_size\": 30,\n \"default_page_size\": 100,\n \"facet_suggest_time_limit_ms\": 50,\n \"facet_time_limit_ms\": 1000,\n \"max_returned_rows\": 1000,\n \"sql_time_limit_ms\": 1000\n}", "sections_fts": 663, "rank": null} {"rowid": 568, "title": "/-/plugins", "content": "Shows a list of currently installed plugins and their versions. Plugins example : \n [\n {\n \"name\": \"datasette_cluster_map\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.10\",\n \"hooks\": [\"extra_css_urls\", \"extra_js_urls\", \"extra_body_script\"]\n }\n] \n Add ?all=1 to include details of the default plugins baked into Datasette.", "sections_fts": 663, "rank": null} {"rowid": 567, "title": "/-/versions", "content": "Shows the version of Datasette, Python and SQLite. Versions example : \n {\n \"datasette\": {\n \"version\": \"0.60\"\n },\n \"python\": {\n \"full\": \"3.8.12 (default, Dec 21 2021, 10:45:09) \\n[GCC 10.2.1 20210110]\",\n \"version\": \"3.8.12\"\n },\n \"sqlite\": {\n \"extensions\": {\n \"json1\": null\n },\n \"fts_versions\": [\n \"FTS5\",\n \"FTS4\",\n \"FTS3\"\n ],\n \"compile_options\": [\n \"COMPILER=gcc-6.3.0 20170516\",\n \"ENABLE_FTS3\",\n \"ENABLE_FTS4\",\n \"ENABLE_FTS5\",\n \"ENABLE_JSON1\",\n \"ENABLE_RTREE\",\n \"THREADSAFE=1\"\n ],\n \"version\": \"3.37.0\"\n }\n}", "sections_fts": 663, "rank": null} {"rowid": 566, "title": "/-/metadata", "content": "Shows the contents of the metadata.json file that was passed to datasette serve , if any. Metadata example : \n {\n \"license\": \"CC Attribution 4.0 License\",\n \"license_url\": \"http://creativecommons.org/licenses/by/4.0/\",\n \"source\": \"fivethirtyeight/data on GitHub\",\n \"source_url\": \"https://github.com/fivethirtyeight/data\",\n \"title\": \"Five Thirty Eight\",\n \"databases\": {\n\n }\n}", "sections_fts": 663, "rank": null} {"rowid": 565, "title": "Introspection", "content": "Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. \n Each of these pages can be viewed in your browser. 
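For example, with a local instance you could open this page in your browser (assuming the datasette serve default of port 8001): 
 http://localhost:8001/-/versions 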
Add .json to the URL to get back the contents as JSON.", "sections_fts": 663, "rank": null} {"rowid": 564, "title": "Contents", "content": "Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions Configuration Configuration via the command-line datasette.yaml reference Settings Plugin configuration Permissions configuration Canned queries configuration Custom CSS and JavaScript The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect datasette create-token Pages and API endpoints Top-level index Database Hidden tables Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Default representation Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Enabling CORS The JSON write API Inserting rows Upserting rows Updating a row Deleting a row Creating a table Creating a table from example data Dropping tables Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the \"root\" actor Permissions How permissions are resolved Defining permissions with \"allow\" blocks The /-/allow-debug tool Access permissions in datasette.yaml Access to an instance Access to specific databases Access to specific tables and views Access to specific canned queries Controlling the ability to execute arbitrary SQL Other permissions in datasette.yaml API Tokens datasette create-token Checking permissions in plugins actor_matches_allow() The permissions debug tool The ds_actor cookie Including an expiry time The /-/logout page Built-in permissions view-instance view-database view-database-download view-table view-query insert-row delete-row update-row create-table alter-table drop-table execute-sql permissions-debug debug-menu Performance and caching Immutable mode Using \"datasette inspect\" HTTP caching datasette-hashed-urls CSV export URL parameters Streaming all records Binary data Linking to binary downloads Binary plugins Facets Facets in query strings Facets in metadata Suggested facets Speeding up facets with indexes Facet by JSON array Facet by date Full-text search The table page and table view API Advanced SQLite search queries Configuring full-text search for a table or view Searches using custom SQL Enabling full-text search for a SQLite table Configuring FTS using sqlite-utils Configuring FTS using csvs-to-sqlite Configuring FTS by hand FTS versions SpatiaLite Warning Installation 
Installing SpatiaLite on OS X Installing SpatiaLite on Linux Spatial indexing latitude/longitude columns Making use of a spatial index Importing shapefiles into SpatiaLite Importing GeoJSON polygons using Shapely Querying polygons using within() Metadata Per-database and per-table metadata Source, license and about Column descriptions Specifying units for a column Setting a default sort order Setting a custom page size Setting which columns can be used for sorting Specifying the label column for a table Hiding tables Metadata reference Top-level metadata Database-level metadata Table-level metadata Settings Using --setting Configuration directory mode Settings default_allow_sql default_page_size sql_time_limit_ms max_returned_rows max_insert_rows num_sql_threads allow_facet default_facet_size facet_time_limit_ms facet_suggest_time_limit_ms suggest_facets allow_download allow_signed_tokens max_signed_tokens_ttl default_cache_ttl cache_size_kb allow_csv_stream max_csv_mb truncate_cells_html force_https_urls template_debug trace_debug base_url Configuring the secret Using secrets with datasette publish Introspection /-/metadata /-/versions /-/plugins /-/settings /-/config /-/databases /-/threads /-/actor /-/messages Custom pages and templates CSS classes on the <body> Serving static files Publishing static assets Custom templates Custom pages Path parameters for pages Custom headers and status codes Returning 404s Custom redirects Custom error pages Plugins Installing plugins One-off plugins using --plugins-dir Deploying plugins using datasette publish Controlling which plugins are loaded Seeing what plugins are installed Plugin configuration Secret configuration values Writing plugins Tracing plugin hooks Writing one-off plugins Starting an installable plugin using cookiecutter Packaging a plugin Static assets Custom templates Writing plugins that accept configuration Designing URLs for your plugin Building URLs within plugins Plugins that define new plugin hooks JavaScript plugins The datasette_init event datasetteManager JavaScript plugin objects makeAboveTablePanelConfigs() makeColumnActions(columnDetails) Selectors Plugin hooks prepare_connection(conn, database, datasette) prepare_jinja2_environment(env, datasette) Page extras extra_template_vars(template, database, table, columns, view_name, request, datasette) extra_css_urls(template, database, table, columns, view_name, request, datasette) extra_js_urls(template, database, table, columns, view_name, request, datasette) extra_body_script(template, database, table, columns, view_name, request, datasette) publish_subcommand(publish) render_cell(row, value, column, table, database, datasette, request) register_output_renderer(datasette) register_routes(datasette) register_commands(cli) register_facet_classes() register_permissions(datasette) asgi_wrapper(datasette) startup(datasette) canned_queries(datasette, database, actor) actor_from_request(datasette, request) actors_from_ids(datasette, actor_ids) jinja2_environment_from_request(datasette, request, env) filters_from_request(request, database, table, datasette) permission_allowed(datasette, actor, action, resource) register_magic_parameters(datasette) forbidden(datasette, request, message) handle_exception(datasette, request, exception) skip_csrf(datasette, scope) get_metadata(datasette, key, database, table) menu_links(datasette, actor, request) Action hooks table_actions(datasette, actor, database, table, request) view_actions(datasette, actor, database, view, request) 
query_actions(datasette, actor, database, query_name, request, sql, params) row_actions(datasette, actor, request, database, table, row) database_actions(datasette, actor, database, request) homepage_actions(datasette, actor, request) Template slots top_homepage(datasette, request) top_database(datasette, request, database) top_table(datasette, request, database, table) top_row(datasette, request, database, table, row) top_query(datasette, request, database, sql) top_canned_query(datasette, request, database, query_name) Event tracking track_event(datasette, event) register_events(datasette) Testing plugins Setting up a Datasette test instance Using datasette.client in tests Using pdb for errors thrown inside Datasette Using pytest fixtures Testing outbound HTTP calls with pytest-httpx Registering a plugin for the duration of a test Internals for plugins Request object The MultiParams class Response class Returning a response with .asgi_send(send) Setting cookies with response.set_cookie() Datasette class .databases .permissions .plugin_config(plugin_name, database=None, table=None) await .render_template(template, context=None, request=None) await .actors_from_ids(actor_ids) await .permission_allowed(actor, action, resource=None, default=...) await .ensure_permissions(actor, permissions) await .check_visibility(actor, action=None, resource=None, permissions=None) .create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None) .get_permission(name_or_abbr) .get_database(name) .get_internal_database() .add_database(db, name=None, route=None) .add_memory_database(name) .remove_database(name) await .track_event(event) .sign(value, namespace=\"default\") .unsign(value, namespace=\"default\") .add_message(request, message, type=datasette.INFO) .absolute_url(request, path) .setting(key) .resolve_database(request) .resolve_table(request) .resolve_row(request) datasette.client datasette.urls Database class Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None) db.hash await db.execute(sql, ...) 
Results await db.execute_fn(fn) await db.execute_write(sql, params=None, block=True) await db.execute_write_script(sql, block=True) await db.execute_write_many(sql, params_seq, block=True) await db.execute_write_fn(fn, block=True, transaction=True) await db.execute_isolated_fn(fn) db.close() Database introspection CSRF protection Datasette's internal database The datasette.utils module parse_metadata(content) await_me_maybe(value) derive_named_parameters(db, sql) Tilde encoding datasette.tracer Tracing child tasks Import shortcuts Events LoginEvent LogoutEvent CreateTokenEvent CreateTableEvent DropTableEvent AlterTableEvent InsertRowsEvent UpsertRowsEvent UpdateRowEvent DeleteRowEvent Contributing General guidelines Setting up a development environment Running the tests Using fixtures Debugging Code formatting Running Black blacken-docs Prettier Editing and building the documentation Running Cog Continuously deployed demo instances Release process Alpha and beta releases Releasing bug fixes from a branch Upgrading CodeMirror Changelog 1.0a13 (2024-03-12) 1.0a12 (2024-02-29) 1.0a11 (2024-02-19) 1.0a10 (2024-02-17) 1.0a9 (2024-02-16) Alter table support for create, insert, upsert and update Permissions fix for the upsert API Permission checks now consider opinions from every plugin Other changes 1.0a8 (2024-02-07) Configuration JavaScript plugins Plugin hooks Documentation Minor fixes 0.64.6 (2023-12-22) 0.64.5 (2023-10-08) 1.0a7 (2023-09-21) 0.64.4 (2023-09-21) 1.0a6 (2023-09-07) 1.0a5 (2023-08-29) 1.0a4 (2023-08-21) 1.0a3 (2023-08-09) Smaller changes 0.64.2 (2023-03-08) 0.64.1 (2023-01-11) 0.64 (2023-01-09) 0.63.3 (2022-12-17) 1.0a2 (2022-12-14) 1.0a1 (2022-12-01) 1.0a0 (2022-11-29) Signed API tokens Write API 0.63.2 (2022-11-18) 0.63.1 (2022-11-10) 0.63 (2022-10-27) Features Plugin hooks and internals Documentation 0.62 (2022-08-14) Features Plugin hooks Bug fixes Documentation 0.61.1 (2022-03-23) 0.61 (2022-03-23) 0.60.2 (2022-02-07) 0.60.1 (2022-01-20) 0.60 (2022-01-13) Plugins and internals Faceting Other small fixes 0.59.4 (2021-11-29) 0.59.3 (2021-11-20) 0.59.2 (2021-11-13) 0.59.1 (2021-10-24) 0.59 (2021-10-14) 0.58.1 (2021-07-16) 0.58 (2021-07-14) 0.57.1 (2021-06-08) 0.57 (2021-06-05) New features Bug fixes and other improvements 0.56.1 (2021-06-05) 0.56 (2021-03-28) 0.55 (2021-02-18) 0.54.1 (2021-02-02) 0.54 (2021-01-25) The _internal database Named in-memory database support JavaScript modules Code formatting with Black and Prettier Other changes 0.53 (2020-12-10) 0.52.5 (2020-12-09) 0.52.4 (2020-12-05) 0.52.3 (2020-12-03) 0.52.2 (2020-12-02) 0.52.1 (2020-11-29) 0.52 (2020-11-28) 0.51.1 (2020-10-31) 0.51 (2020-10-31) New visual design Plugins can now add links within Datasette Binary data URL building Running Datasette behind a proxy Smaller changes 0.50.2 (2020-10-09) 0.50.1 (2020-10-09) 0.50 (2020-10-09) 0.49.1 (2020-09-15) 0.49 (2020-09-14) 0.48 (2020-08-16) 0.47.3 (2020-08-15) 0.47.2 (2020-08-12) 0.47.1 (2020-08-11) 0.47 (2020-08-11) 0.46 (2020-08-09) 0.45 (2020-07-01) Magic parameters for canned queries Log out Better plugin documentation New plugin hooks Smaller changes 0.44 (2020-06-11) Authentication Permissions Writable canned queries Flash messages Signed values and secrets CSRF protection Cookie methods register_routes() plugin hooks Smaller changes The road to Datasette 1.0 0.43 (2020-05-28) 0.42 (2020-05-08) 0.41 (2020-05-06) 0.40 (2020-04-21) 0.39 (2020-03-24) 0.38 (2020-03-08) 0.37.1 (2020-03-02) 0.37 (2020-02-25) 0.36 (2020-02-21) 0.35 (2020-02-04) 0.34 
(2020-01-29) 0.33 (2019-12-22) 0.32 (2019-11-14) 0.31.2 (2019-11-13) 0.31.1 (2019-11-12) 0.31 (2019-11-11) 0.30.2 (2019-11-02) 0.30.1 (2019-10-30) 0.30 (2019-10-18) 0.29.3 (2019-09-02) 0.29.2 (2019-07-13) 0.29.1 (2019-07-11) 0.29 (2019-07-07) ASGI New plugin hook: asgi_wrapper New plugin hook: extra_template_vars Secret plugin configuration options Facet by date Easier custom templates for table rows ?_through= for joins through many-to-many tables Small changes 0.28 (2019-05-19) Supporting databases that change Faceting improvements, and faceting plugins datasette publish cloudrun register_output_renderer plugins Medium changes Small changes 0.27.1 (2019-05-09) 0.27 (2019-01-31) 0.26.1 (2019-01-10) 0.26 (2019-01-02) 0.25.2 (2018-12-16) 0.25.1 (2018-11-04) 0.25 (2018-09-19) 0.24 (2018-07-23) 0.23.2 (2018-07-07) 0.23.1 (2018-06-21) 0.23 (2018-06-18) CSV export Foreign key expansions New configuration settings Control HTTP caching with ?_ttl= Improved support for SpatiaLite latest.datasette.io Miscellaneous 0.22.1 (2018-05-23) 0.22 (2018-05-20) 0.21 (2018-05-05) 0.20 (2018-04-20) 0.19 (2018-04-16) 0.18 (2018-04-14) 0.17 (2018-04-13) 0.16 (2018-04-13) 0.15 (2018-04-09) 0.14 (2017-12-09) 0.13 (2017-11-24) 0.12 (2017-11-16) 0.11 (2017-11-14) 0.10 (2017-11-14) 0.9 (2017-11-13) 0.8 (2017-11-13)", "sections_fts": 663, "rank": null} {"rowid": 563, "title": "Datasette", "content": "An open source multi-tool for exploring and publishing data \n Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API. \n Datasette is aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is part of a wider ecosystem of tools and plugins dedicated to making working with structured data as productive as possible. \n Explore a demo , watch a presentation about the project or Try Datasette without installing anything using Glitch . \n Interested in learning Datasette? Start with the official tutorials . \n Support questions, feedback? Join the Datasette Discord .", "sections_fts": 663, "rank": null} {"rowid": 562, "title": "Selectors", "content": "These are available on the selectors property of the datasetteManager object. \n const DOM_SELECTORS = {\n /** Should have one match */\n jsonExportLink: \".export-links a[href*=json]\",\n\n /** Event listeners that go outside of the main table, e.g. existing scroll listener */\n tableWrapper: \".table-wrapper\",\n table: \"table.rows-and-columns\",\n aboveTablePanel: \".above-table-panel\",\n\n // These could have multiple matches\n /** Used for selecting table headers. Use makeColumnActions if you want to add menu items. */\n tableHeaders: `table.rows-and-columns th`,\n\n /** Used to add \"where\" clauses to query using direct manipulation */\n filterRows: \".filter-row\",\n /** Used to show top available enum values for a column (\"facets\") */\n facetResults: \".facet-results [data-column]\",\n};", "sections_fts": 663, "rank": null} {"rowid": 561, "title": "makeColumnActions(columnDetails)", "content": "This method, if present, will be called when Datasette is rendering the cog action menu icons that appear at the top of the table view. By default these include options like \"Sort ascending/descending\" and \"Facet by this\", but plugins can return additional actions to be included in this menu. 
\n The method will be called with a columnDetails object with the following keys: \n \n \n columnName - string \n \n The name of the column \n \n \n \n columnNotNull - boolean \n \n True if the column is defined as NOT NULL \n \n \n \n columnType - string \n \n The SQLite data type of the column \n \n \n \n isPk - boolean \n \n True if the column is part of the primary key \n \n \n \n It should return a JavaScript array of objects each with a label and onClick property: \n \n \n label - string \n \n The human-readable label for the action \n \n \n \n onClick(evt) - function \n \n A function that will be called when the action is clicked \n \n \n \n The evt object passed to the onClick is the standard browser event object that triggered the click. \n This example plugin adds two menu items - one to copy the column name to the clipboard and another that displays the column metadata in an alert() window: \n document.addEventListener('datasette_init', function(ev) {\n ev.detail.registerPlugin('column-name-plugin', {\n version: 0.1,\n makeColumnActions: (columnDetails) => {\n return [\n {\n label: 'Copy column to clipboard',\n onClick: async (evt) => {\n await navigator.clipboard.writeText(columnDetails.columnName)\n }\n },\n {\n label: 'Alert column metadata',\n onClick: () => alert(JSON.stringify(columnDetails, null, 2))\n }\n ];\n }\n });\n});", "sections_fts": 663, "rank": null} {"rowid": 560, "title": "makeAboveTablePanelConfigs()", "content": "This method should return a JavaScript array of objects defining additional panels to be added to the top of the table page. Each object should have the following: \n \n \n id - string \n \n A unique string ID for the panel, for example map-panel \n \n \n \n label - string \n \n A human-readable label for the panel \n \n \n \n render(node) - function \n \n A function that will be called with a DOM node to render the panel into \n \n \n \n This example shows how a plugin might define a single panel: \n document.addEventListener('datasette_init', function(ev) {\n ev.detail.registerPlugin('panel-plugin', {\n version: 0.1,\n makeAboveTablePanelConfigs: () => {\n return [\n {\n id: 'first-panel',\n label: 'First panel',\n render: node => {\n node.innerHTML = '
<h2>My custom panel</h2><p>This is a custom panel that I added using a JavaScript plugin</p>
';\n }\n }\n ]\n }\n });\n}); \n When a page with a table loads, all registered plugins that implement makeAboveTablePanelConfigs() will be called and panels they return will be added to the top of the table page.", "sections_fts": 663, "rank": null} {"rowid": 559, "title": "JavaScript plugin objects", "content": "JavaScript plugins are blocks of code that can be registered with Datasette using the registerPlugin() method on the datasetteManager object. \n The implementation object passed to this method should include a version key defining the plugin version, and one or more of the following named functions providing the implementation of the plugin:", "sections_fts": 663, "rank": null} {"rowid": 558, "title": "datasetteManager", "content": "The datasetteManager object \n \n \n VERSION - string \n \n The version of Datasette \n \n \n \n plugins - Map() \n \n A Map of currently loaded plugin names to plugin implementations \n \n \n \n registerPlugin(name, implementation) \n \n Call this to register a plugin, passing its name and implementation \n \n \n \n selectors - object \n \n An object providing named aliases to useful CSS selectors, listed below", "sections_fts": 663, "rank": null} {"rowid": 557, "title": "The datasette_init event", "content": "Datasette emits a custom event called datasette_init when the page is loaded. This event is dispatched on the document object, and includes a detail object with a reference to the datasetteManager object. \n Your JavaScript code can listen out for this event using document.addEventListener() like this: \n document.addEventListener(\"datasette_init\", function (evt) {\n const manager = evt.detail;\n console.log(\"Datasette version:\", manager.VERSION);\n});", "sections_fts": 663, "rank": null} {"rowid": 556, "title": "JavaScript plugins", "content": "Datasette can run custom JavaScript in several different ways: \n \n \n Datasette plugins written in Python can use the extra_js_urls() or extra_body_script() plugin hooks to inject JavaScript into a page \n \n \n Datasette instances with custom templates can include additional JavaScript in those templates \n \n \n The extra_js_urls key in datasette.yaml can be used to include extra JavaScript \n \n \n There are no limitations on what this JavaScript can do. It is executed directly by the browser, so it can manipulate the DOM, fetch additional data and do anything else that JavaScript is capable of. \n \n Custom JavaScript has security implications, especially for authenticated Datasette instances where the JavaScript might run in the context of the authenticated user. It's important to carefully review any JavaScript you run in your Datasette instance.", "sections_fts": 663, "rank": null} {"rowid": 555, "title": "debug-menu", "content": "Controls if the various debug pages are displayed in the navigation menu. \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 554, "title": "permissions-debug", "content": "Actor is allowed to view the /-/permissions debug page. \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 553, "title": "execute-sql", "content": "Actor is allowed to run arbitrary SQL queries against a specific database, e.g. https://latest.datasette.io/fixtures?sql=select+100 \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow . See also the default_allow_sql setting .", "sections_fts": 663, "rank": null} {"rowid": 552, "title": "drop-table", "content": "Actor is allowed to drop a database table. 
\n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 551, "title": "alter-table", "content": "Actor is allowed to alter a database table. \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 550, "title": "create-table", "content": "Actor is allowed to create a database table. \n \n \n resource - string \n \n The name of the database \n \n \n \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 549, "title": "update-row", "content": "Actor is allowed to update rows in a table. \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 548, "title": "delete-row", "content": "Actor is allowed to delete rows from a table. \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 547, "title": "insert-row", "content": "Actor is allowed to insert rows into a table. \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default deny .", "sections_fts": 663, "rank": null} {"rowid": 546, "title": "view-query", "content": "Actor is allowed to view (and execute) a canned query page, e.g. https://latest.datasette.io/fixtures/pragma_cache_size - this includes executing Writable canned queries . \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the canned query \n \n \n \n Default allow .", "sections_fts": 663, "rank": null} {"rowid": 545, "title": "view-table", "content": "Actor is allowed to view a table (or view) page, e.g. https://latest.datasette.io/fixtures/complex_foreign_keys \n \n \n resource - tuple: (string, string) \n \n The name of the database, then the name of the table \n \n \n \n Default allow .", "sections_fts": 663, "rank": null} {"rowid": 544, "title": "view-database-download", "content": "Actor is allowed to download a database, e.g. https://latest.datasette.io/fixtures.db \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow .", "sections_fts": 663, "rank": null} {"rowid": 543, "title": "view-database", "content": "Actor is allowed to view a database page, e.g. https://latest.datasette.io/fixtures \n \n \n resource - string \n \n The name of the database \n \n \n \n Default allow .", "sections_fts": 663, "rank": null} {"rowid": 542, "title": "view-instance", "content": "Top level permission - Actor is allowed to view any pages within this instance, starting at https://latest.datasette.io/ \n Default allow .", "sections_fts": 663, "rank": null} {"rowid": 541, "title": "Built-in permissions", "content": "This section lists all of the permission checks that are carried out by Datasette core, along with the resource if it was passed.", "sections_fts": 663, "rank": null} {"rowid": 540, "title": "The /-/logout page", "content": "The page at /-/logout provides the ability to log out of a ds_actor cookie authentication session.", "sections_fts": 663, "rank": null} {"rowid": 539, "title": "Including an expiry time", "content": "ds_actor cookies can optionally include a signed expiry timestamp, after which the cookies will no longer be valid. 
Authentication plugins may chose to use this mechanism to limit the lifetime of the cookie. For example, if a plugin implements single-sign-on against another source it may decide to set short-lived cookies so that if the user is removed from the SSO system their existing Datasette cookies will stop working shortly afterwards. \n To include an expiry, add a \"e\" key to the cookie value containing a base62-encoded integer representing the timestamp when the cookie should expire. For example, here's how to set a cookie that expires after 24 hours: \n import time\nfrom datasette.utils import baseconv\n\nexpires_at = int(time.time()) + (24 * 60 * 60)\n\nresponse = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign(\n {\n \"a\": {\"id\": \"cleopaws\"},\n \"e\": baseconv.base62.encode(expires_at),\n },\n \"actor\",\n ),\n) \n The resulting cookie will encode data that looks something like this: \n {\n \"a\": {\n \"id\": \"cleopaws\"\n },\n \"e\": \"1jjSji\"\n}", "sections_fts": 663, "rank": null} {"rowid": 538, "title": "The ds_actor cookie", "content": "Datasette includes a default authentication plugin which looks for a signed ds_actor cookie containing a JSON actor dictionary. This is how the root actor mechanism works. \n Authentication plugins can set signed ds_actor cookies themselves like so: \n response = Response.redirect(\"/\")\nresponse.set_cookie(\n \"ds_actor\",\n datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n) \n Note that you need to pass \"actor\" as the namespace to .sign(value, namespace=\"default\") . \n The shape of data encoded in the cookie is as follows: \n {\n \"a\": {... actor ...}\n}", "sections_fts": 663, "rank": null} {"rowid": 537, "title": "The permissions debug tool", "content": "The debug tool at /-/permissions is only available to the authenticated root user (or any actor granted the permissions-debug action). \n It shows the thirty most recent permission checks that have been carried out by the Datasette instance. \n It also provides an interface for running hypothetical permission checks against a hypothetical actor. This is a useful way of confirming that your configured permissions work in the way you expect. \n This is designed to help administrators and plugin authors understand exactly how permission checks are being carried out, in order to effectively configure Datasette's permission system.", "sections_fts": 663, "rank": null} {"rowid": 536, "title": "actor_matches_allow()", "content": "Plugins that wish to implement this same \"allow\" block permissions scheme can take advantage of the datasette.utils.actor_matches_allow(actor, allow) function: \n from datasette.utils import actor_matches_allow\n\nactor_matches_allow({\"id\": \"root\"}, {\"id\": \"*\"})\n# returns True \n The currently authenticated actor is made available to plugins as request.actor .", "sections_fts": 663, "rank": null} {"rowid": 535, "title": "Checking permissions in plugins", "content": "Datasette plugins can check if an actor has permission to perform an action using the datasette.permission_allowed(...) method. \n Datasette core performs a number of permission checks, documented below . 
Plugins can implement the permission_allowed(datasette, actor, action, resource) plugin hook to participate in decisions about whether an actor should be able to perform a specified action.", "sections_fts": 663, "rank": null} {"rowid": 534, "title": "Restricting the actions that a token can perform", "content": "Tokens created using datasette create-token ACTOR_ID will inherit all of the permissions of the actor that they are associated with. \n You can pass additional options to create tokens that are restricted to a subset of that actor's permissions. \n To restrict the token to just specific permissions against all available databases, use the --all option: \n datasette create-token root --all insert-row --all update-row \n This option can be passed as many times as you like. In the above example the token will only be allowed to insert and update rows. \n You can also restrict permissions such that they can only be used within specific databases: \n datasette create-token root --database mydatabase insert-row \n The resulting token will only be able to insert rows, and only to tables in the mydatabase database. \n Finally, you can restrict permissions to individual resources - tables, SQL views and named queries - within a specific database: \n datasette create-token root --resource mydatabase mytable insert-row \n These options have short versions: -a for --all , -d for --database and -r for --resource . \n You can add --debug to see a JSON representation of the token that has been created. Here's a full example: \n datasette create-token root \\\n --secret mysecret \\\n --all view-instance \\\n --all view-table \\\n --database docs view-query \\\n --resource docs documents insert-row \\\n --resource docs documents update-row \\\n --debug \n This example outputs the following: \n dstok_.eJxFizEKgDAMRe_y5w4qYrFXERGxDkVsMI0uxbubdjFL8l_ez1jhwEQCA6Fjjxp90qtkuHawzdjYrh8MFobLxZ_wBH0_gtnAF-hpS5VfmF8D_lnd97lHqUJgLd6sls4H1qwlhA.nH_7RecYHj5qSzvjhMU95iy0Xlc\n\nDecoded:\n\n{\n \"a\": \"root\",\n \"token\": \"dstok\",\n \"t\": 1670907246,\n \"_r\": {\n \"a\": [\n \"vi\",\n \"vt\"\n ],\n \"d\": {\n \"docs\": [\n \"vq\"\n ]\n },\n \"r\": {\n \"docs\": {\n \"documents\": [\n \"ir\",\n \"ur\"\n ]\n }\n }\n }\n}", "sections_fts": 663, "rank": null} {"rowid": 533, "title": "datasette create-token", "content": "You can also create tokens on the command line using the datasette create-token command. \n This command takes one required argument - the ID of the actor to be associated with the created token. \n You can specify a -e/--expires-after option in seconds. If omitted, the token will never expire. \n The command will sign the token using the DATASETTE_SECRET environment variable, if available. You can also pass the secret using the --secret option. \n This means you can run the command locally to create tokens for use with a deployed Datasette instance, provided you know that instance's secret. \n To create a token for the root actor that will expire in one hour: \n datasette create-token root --expires-after 3600 \n To create a token that never expires using a specific secret: \n datasette create-token root --secret my-secret-goes-here", "sections_fts": 663, "rank": null} {"rowid": 532, "title": "API Tokens", "content": "Datasette includes a default mechanism for generating API tokens that can be used to authenticate requests. \n Authenticated users can create new API tokens using a form on the /-/create-token page. 
\n Tokens created in this way can be further restricted to only allow access to specific actions, or to limit those actions to specific databases, tables or queries. \n Created tokens can then be passed in the Authorization: Bearer $token header of HTTP requests to Datasette. \n A token created by a user will include that user's "id" in the token payload, so any permissions granted to that user based on their ID can be made available to the token as well. \n When one of these tokens accompanies a request, the actor for that request will have the following shape: \n {\n "id": "user_id",\n "token": "dstok",\n "token_expires": 1667717426\n} \n The "id" field duplicates the ID of the actor who first created the token. \n The "token" field identifies that this actor was authenticated using a Datasette signed token ( dstok ). \n The "token_expires" field, if present, indicates that the token will expire after that integer timestamp. \n The /-/create-token page cannot be accessed by actors that are authenticated with a "token": "some-value" property. This is to prevent API tokens from being used to create more tokens. \n Datasette plugins that implement their own form of API token authentication should follow this convention. \n You can disable the signed token feature entirely using the allow_signed_tokens setting.", "sections_fts": 663, "rank": null} {"rowid": 531, "title": "Other permissions in datasette.yaml", "content": "For all other permissions, you can use one or more "permissions" blocks in your datasette.yaml configuration file. \n To grant access to the permissions debug tool to all signed in users, you can grant permissions-debug to any actor with an id matching the wildcard * by adding this at the root of your configuration: \n [[[cog\nconfig_example(cog, """\n permissions:\n permissions-debug:\n id: '*'\n""") \n ]]] \n [[[end]]] \n To grant create-table to the user with id of editor for the docs database: \n [[[cog\nconfig_example(cog, """\n databases:\n docs:\n permissions:\n create-table:\n id: editor\n""") \n ]]] \n [[[end]]] \n And for insert-row against the reports table in that docs database: \n [[[cog\nconfig_example(cog, """\n databases:\n docs:\n tables:\n reports:\n permissions:\n insert-row:\n id: editor\n""") \n ]]] \n [[[end]]] \n The permissions debug tool can be useful for helping test permissions that you have configured in this way.", "sections_fts": 663, "rank": null} {"rowid": 530, "title": "Controlling the ability to execute arbitrary SQL", "content": "Datasette defaults to allowing any site visitor to execute their own custom SQL queries, for example using the form on the database page or by appending a ?_where= parameter to the table page like this . \n Access to this ability is controlled by the execute-sql permission. \n The easiest way to disable arbitrary SQL queries is using the default_allow_sql setting when you first start Datasette running. \n You can alternatively use an "allow_sql" block to control who is allowed to execute arbitrary SQL queries. 
\n To prevent any user from executing arbitrary SQL queries, use this: \n [[[cog\nconfig_example(cog, """\n allow_sql: false\n""") \n ]]] \n [[[end]]] \n To enable just the root user to execute SQL for all databases in your instance, use the following: \n [[[cog\nconfig_example(cog, """\n allow_sql:\n id: root\n""") \n ]]] \n [[[end]]] \n To limit this ability for just one specific database, use this: \n [[[cog\nconfig_example(cog, """\n databases:\n mydatabase:\n allow_sql:\n id: root\n""") \n ]]] \n [[[end]]]", "sections_fts": 663, "rank": null} {"rowid": 529, "title": "Access to specific canned queries", "content": "Canned queries allow you to configure named SQL queries in your datasette.yaml that can be executed by users. These queries can be set up to both read and write to the database, so controlling who can execute them can be important. \n To limit access to the add_name canned query in your dogs.db database to just the root user : \n [[[cog\nconfig_example(cog, """\n databases:\n dogs:\n queries:\n add_name:\n sql: INSERT INTO names (name) VALUES (:name)\n write: true\n allow:\n id:\n - root\n""") \n ]]] \n [[[end]]]", "sections_fts": 663, "rank": null} {"rowid": 528, "title": "Access to specific tables and views", "content": "To limit access to the users table in your bakery.db database: \n [[[cog\nconfig_example(cog, """\n databases:\n bakery:\n tables:\n users:\n allow:\n id: '*'\n""") \n ]]] \n [[[end]]] \n This works for SQL views as well - you can list their names in the "tables" block above in the same way as regular tables. \n \n Restricting access to tables and views in this way will NOT prevent users from querying them using arbitrary SQL queries, like this for example. \n If you are restricting access to specific tables you should also use the "allow_sql" block to prevent users from bypassing the limit with their own SQL queries - see Controlling the ability to execute arbitrary SQL .", "sections_fts": 663, "rank": null} {"rowid": 527, "title": "Access to specific databases", "content": "To limit access to a specific private.db database to just authenticated users, use the "allow" block like this: \n [[[cog\nconfig_example(cog, """\n databases:\n private:\n allow:\n id: "*"\n""") \n ]]] \n [[[end]]]", "sections_fts": 663, "rank": null} {"rowid": 526, "title": "Access to an instance", "content": "Here's how to restrict access to your entire Datasette instance to just the "id": "root" user: \n [[[cog\nfrom metadata_doc import config_example\nconfig_example(cog, """\n title: My private Datasette instance\n allow:\n id: root\n """) \n ]]] \n [[[end]]] \n To deny access to all users, you can use "allow": false : \n [[[cog\nconfig_example(cog, """\n title: My entirely inaccessible instance\n allow: false\n""") \n ]]] \n [[[end]]] \n One reason to do this is if you are using a Datasette plugin - such as datasette-permissions-sql - to control permissions instead.", "sections_fts": 663, "rank": null} {"rowid": 525, "title": "Access permissions in datasette.yaml", "content": "There are two ways to configure permissions using datasette.yaml (or datasette.json ). \n For simple visibility permissions you can use "allow" blocks in the root, database, table and query sections. \n For other permissions you can use a "permissions" block, described in the next section . \n You can limit who is allowed to view different parts of your Datasette instance using "allow" keys in your Configuration . 
\n You can control the following: \n \n \n Access to the entire Datasette instance \n \n \n Access to specific databases \n \n \n Access to specific tables and views \n \n \n Access to specific Canned queries \n \n \n If a user cannot access a specific database, they will not be able to access tables, views or queries within that database. If a user cannot access the instance they will not be able to access any of the databases, tables, views or queries.", "sections_fts": 663, "rank": null} {"rowid": 524, "title": "The /-/allow-debug tool", "content": "The /-/allow-debug tool lets you try out different \"action\" blocks against different \"actor\" JSON objects. You can try that out here: https://latest.datasette.io/-/allow-debug", "sections_fts": 663, "rank": null} {"rowid": 523, "title": "Defining permissions with \"allow\" blocks", "content": "The standard way to define permissions in Datasette is to use an \"allow\" block in the datasette.yaml file . This is a JSON document describing which actors are allowed to perform a permission. \n The most basic form of allow block is this ( allow demo , deny demo ): \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow:\n id: root\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n This will match any actors with an \"id\" property of \"root\" - for example, an actor that looks like this: \n {\n \"id\": \"root\",\n \"name\": \"Root User\"\n} \n An allow block can specify \"deny all\" using false ( demo ): \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow: false\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n An \"allow\" of true allows all access ( demo ): \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow: true\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n Allow keys can provide a list of values. These will match any actor that has any of those values ( allow demo , deny demo ): \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow:\n id:\n - simon\n - cleopaws\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n This will match any actor with an \"id\" of either \"simon\" or \"cleopaws\" . \n Actors can have properties that feature a list of values. These will be matched against the list of values in an allow block. Consider the following actor: \n {\n \"id\": \"simon\",\n \"roles\": [\"staff\", \"developer\"]\n} \n This allow block will provide access to any actor that has \"developer\" as one of their roles ( allow demo , deny demo ): \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow:\n roles:\n - developer\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n Note that \"roles\" is not a concept that is baked into Datasette - it's a convention that plugins can choose to implement and act on. \n If you want to provide access to any actor with a value for a specific key, use \"*\" . 
For example, to match any logged-in user specify the following ( allow demo , deny demo ): \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow:\n id: \"*\"\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n You can specify that only unauthenticated actors (from anonymous HTTP requests) should be allowed access using the special \"unauthenticated\": true key in an allow block ( allow demo , deny demo ): \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow:\n unauthenticated: true\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n Allow keys act as an \"or\" mechanism. An actor will be able to execute the query if any of their JSON properties match any of the values in the corresponding lists in the allow block. The following block will allow users with either a role of \"ops\" OR users who have an id of \"simon\" or \"cleopaws\" : \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n allow:\n id:\n - simon\n - cleopaws\n role: ops\n \"\"\").strip(),\n \"YAML\", \"JSON\"\n ) \n ]]] \n [[[end]]] \n Demo for cleopaws , demo for ops role , demo for an actor matching neither rule .", "sections_fts": 663, "rank": null} {"rowid": 522, "title": "How permissions are resolved", "content": "The datasette.permission_allowed(actor, action, resource=None, default=...) method is called to check if an actor is allowed to perform a specific action. \n This method asks every plugin that implements the permission_allowed(datasette, actor, action, resource) hook if the actor is allowed to perform the action. \n Each plugin can return True to indicate that the actor is allowed to perform the action, False if they are not allowed and None if the plugin has no opinion on the matter. \n False acts as a veto - if any plugin returns False then the permission check is denied. Otherwise, if any plugin returns True then the permission check is allowed. \n The resource argument can be used to specify a specific resource that the action is being performed against. Some permissions, such as view-instance , do not involve a resource. Others such as view-database have a resource that is a string naming the database. Permissions that take both a database name and the name of a table, view or canned query within that database use a resource that is a tuple of two strings, (database_name, resource_name) . \n Plugins that implement the permission_allowed() hook can decide if they are going to consider the provided resource or not.", "sections_fts": 663, "rank": null} {"rowid": 521, "title": "Permissions", "content": "Datasette has an extensive permissions system built-in, which can be further extended and customized by plugins. \n The key question the permissions system answers is this: \n \n Is this actor allowed to perform this action , optionally against this particular resource ? \n \n Actors are described above . \n An action is a string describing the action the actor would like to perform. A full list is provided below - examples include view-table and execute-sql . \n A resource is the item the actor wishes to interact with - for example a specific database or table. Some actions, such as permissions-debug , are not associated with a particular resource. 
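As a rough sketch, those resource shapes look like this in plugin code that calls the permission check method ( actor is the actor dictionary; the fixtures names reuse examples from earlier sections): 
 # Some actions take no resource at all
await datasette.permission_allowed(actor, "view-instance")
# Others take a string naming a database
await datasette.permission_allowed(actor, "view-database", resource="fixtures")
# Table, view and canned query actions take a (database, name) tuple
await datasette.permission_allowed(
    actor, "view-table", resource=("fixtures", "complex_foreign_keys")
) 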
\n Datasette's built-in view permissions ( view-database , view-table etc) default to allow - unless you configure additional permission rules unauthenticated users will be allowed to access content. \n Permissions with potentially harmful effects should default to deny . Plugin authors should account for this when designing new plugins - for example, the datasette-upload-csvs plugin defaults to deny so that installations don't accidentally allow unauthenticated users to create new tables by uploading a CSV file.", "sections_fts": 663, "rank": null} {"rowid": 520, "title": "Using the \"root\" actor", "content": "Datasette currently leaves almost all forms of authentication to plugins - datasette-auth-github for example. \n The one exception is the \"root\" account, which you can sign into while using Datasette on your local machine. This provides access to a small number of debugging features. \n To sign in as root, start Datasette using the --root command-line option, like this: \n datasette --root \n http://127.0.0.1:8001/-/auth-token?token=786fc524e0199d70dc9a581d851f466244e114ca92f33aa3b42a139e9388daa7\nINFO: Started server process [25801]\nINFO: Waiting for application startup.\nINFO: Application startup complete.\nINFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) \n The URL on the first line includes a one-use token which can be used to sign in as the \"root\" actor in your browser. Click on that link and then visit http://127.0.0.1:8001/-/actor to confirm that you are authenticated as an actor that looks like this: \n {\n \"id\": \"root\"\n}", "sections_fts": 663, "rank": null} {"rowid": 519, "title": "Actors", "content": "Through plugins, Datasette can support both authenticated users (with cookies) and authenticated API agents (via authentication tokens). The word \"actor\" is used to cover both of these cases. \n Every request to Datasette has an associated actor value, available in the code as request.actor . This can be None for unauthenticated requests, or a JSON compatible Python dictionary for authenticated users or API agents. \n The actor dictionary can be any shape - the design of that data structure is left up to the plugins. A useful convention is to include an \"id\" string, as demonstrated by the \"root\" actor below. \n Plugins can use the actor_from_request(datasette, request) hook to implement custom logic for authenticating an actor based on the incoming HTTP request.", "sections_fts": 663, "rank": null} {"rowid": 518, "title": "Authentication and permissions", "content": "Datasette doesn't require authentication by default. Any visitor to a Datasette instance can explore the full data and execute read-only SQL queries. \n Datasette's plugin system can be used to add many different styles of authentication, such as user accounts, single sign-on or API keys.", "sections_fts": 663, "rank": null} {"rowid": 517, "title": "Using Datasette on your own computer", "content": "First, follow the Installation instructions. Now you can run Datasette against a SQLite file on your computer using the following command: \n datasette path/to/database.db \n This will start a web server on port 8001 - visit http://localhost:8001/ \n to access the web interface. \n Add -o to open your browser automatically once Datasette has started: \n datasette path/to/database.db -o \n Use Chrome on OS X? 
You can run datasette against your browser history\n like so: \n datasette ~/Library/Application\\ Support/Google/Chrome/Default/History --nolock \n The --nolock option ignores any file locks. This is safe as Datasette will open the file in read-only mode. \n Now visiting http://localhost:8001/History/downloads will show you a web\n interface to browse your downloads data: \n \n \n \n http://localhost:8001/History/downloads.json will return that data as\n JSON: \n {\n \"database\": \"History\",\n \"columns\": [\n \"id\",\n \"current_path\",\n \"target_path\",\n \"start_time\",\n \"received_bytes\",\n \"total_bytes\",\n ...\n ],\n \"rows\": [\n [\n 1,\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n 13097290269022132,\n 626688,\n 0,\n ...\n ]\n ]\n} \n http://localhost:8001/History/downloads.json?_shape=objects will return that data as\n JSON in a more convenient format: \n {\n ...\n \"rows\": [\n {\n \"start_time\": 13097290269022132,\n \"interrupt_reason\": 0,\n \"hash\": \"\",\n \"id\": 1,\n \"site_url\": \"\",\n \"referrer\": \"https://www.dropbox.com/downloading?src=index\",\n ...\n }\n ]\n}", "sections_fts": 663, "rank": null} {"rowid": 516, "title": "Try Datasette without installing anything using Glitch", "content": "Glitch is a free online tool for building web apps directly from your web browser. You can use Glitch to try out Datasette without needing to install any software on your own computer. \n Here's a demo project on Glitch which you can use as the basis for your own experiments: \n glitch.com/~datasette-csvs \n Glitch allows you to \"remix\" any project to create your own copy and start editing it in your browser. You can remix the datasette-csvs project by clicking this button: \n \n Find a CSV file and drag it onto the Glitch file explorer panel - datasette-csvs will automatically convert it to a SQLite database (using sqlite-utils ) and allow you to start exploring it using Datasette. \n If your CSV file has a latitude and longitude column you can visualize it on a map by uncommenting the datasette-cluster-map line in the requirements.txt file using the Glitch file editor. \n Need some data? Try this Public Art Data for the city of Seattle - hit \"Export\" and select \"CSV\" to download it as a CSV file. \n For more on how this works, see Running Datasette on Glitch .", "sections_fts": 663, "rank": null} {"rowid": 515, "title": "Datasette in your browser with Datasette Lite", "content": "Datasette Lite is Datasette packaged using WebAssembly so that it runs entirely in your browser, no Python web application server required. \n You can pass a URL to a CSV, SQLite or raw SQL file directly to Datasette Lite to explore that data in your browser. \n This example link opens Datasette Lite and loads the SQL Murder Mystery example database from Northwestern University Knight Lab .", "sections_fts": 663, "rank": null} {"rowid": 514, "title": "Follow a tutorial", "content": "Datasette has several tutorials to help you get started with the tool. Try one of the following: \n \n \n Exploring a database with Datasette shows how to use the Datasette web interface to explore a new database. \n \n \n Learn SQL with Datasette introduces SQL, and shows how to use that query language to ask questions of your data. 
\n \n \n Cleaning data with sqlite-utils and Datasette guides you through using sqlite-utils to turn a CSV file into a database that you can explore using Datasette.", "sections_fts": 663, "rank": null} {"rowid": 513, "title": "Play with a live demo", "content": "The best way to experience Datasette for the first time is with a demo: \n \n \n global-power-plants.datasettes.com provides a searchable database of power plants around the world, using data from the World Resources Institute rendered using the datasette-cluster-map plugin. \n \n \n fivethirtyeight.datasettes.com shows Datasette running against over 400 datasets imported from the FiveThirtyEight GitHub repository .", "sections_fts": 663, "rank": null} {"rowid": 512, "title": "Getting started", "content": "", "sections_fts": 663, "rank": null} {"rowid": 511, "title": "Using secrets with datasette publish", "content": "The datasette publish and datasette package commands both generate a secret for you automatically when Datasette is deployed. \n This means that every time you deploy a new version of a Datasette project, a new secret will be generated. This will cause signed cookies to become invalid on every fresh deploy. \n You can fix this by creating a secret that will be used for multiple deploys and passing it using the --secret option: \n datasette publish cloudrun mydb.db --service=my-service --secret=cdb19e94283a20f9d42cca5", "sections_fts": 663, "rank": null} {"rowid": 510, "title": "Configuring the secret", "content": "Datasette uses a secret string to sign secure values such as cookies. \n If you do not provide a secret, Datasette will create one when it starts up. This secret will reset every time the Datasette server restarts though, so things like authentication cookies and API tokens will not stay valid between restarts. \n You can pass a secret to Datasette in two ways: with the --secret command-line option or by setting a DATASETTE_SECRET environment variable. \n datasette mydb.db --secret=SECRET_VALUE_HERE \n Or: \n export DATASETTE_SECRET=SECRET_VALUE_HERE\ndatasette mydb.db \n One way to generate a secure random secret is to use Python like this: \n python3 -c 'import secrets; print(secrets.token_hex(32))'\ncdb19e94283a20f9d42cca50c5a4871c0aa07392db308755d60a1a5b9bb0fa52 \n Plugin authors make use of this signing mechanism in their plugins using .sign(value, namespace=\"default\") and .unsign(value, namespace=\"default\") .", "sections_fts": 663, "rank": null} {"rowid": 509, "title": "base_url", "content": "If you are running Datasette behind a proxy, it may be useful to change the root path used for the Datasette instance. \n For example, if you are sending traffic from https://www.example.com/tools/datasette/ through to a proxied Datasette instance you may wish Datasette to use /tools/datasette/ as its root URL. \n You can do that like so: \n datasette mydatabase.db --setting base_url /tools/datasette/", "sections_fts": 663, "rank": null} {"rowid": 508, "title": "trace_debug", "content": "This setting enables appending ?_trace=1 to any page in order to see the SQL queries and other trace information that was used to generate that page.
\n Enable it like this: \n datasette mydatabase.db --setting trace_debug 1 \n Some examples: \n \n \n https://latest.datasette.io/?_trace=1 \n \n \n https://latest.datasette.io/fixtures/roadside_attractions?_trace=1 \n \n \n See datasette.tracer for details on how to hook into this mechanism as a plugin author.", "sections_fts": 663, "rank": null} {"rowid": 507, "title": "template_debug", "content": "This setting enables template context debug mode, which is useful to help understand what variables are available to custom templates when you are writing them. \n Enable it like this: \n datasette mydatabase.db --setting template_debug 1 \n Now you can add ?_context=1 or &_context=1 to any Datasette page to see the context that was passed to that template. \n Some examples: \n \n \n https://latest.datasette.io/?_context=1 \n \n \n https://latest.datasette.io/fixtures?_context=1 \n \n \n https://latest.datasette.io/fixtures/roadside_attractions?_context=1", "sections_fts": 663, "rank": null} {"rowid": 506, "title": "force_https_urls", "content": "Forces self-referential URLs in the JSON output to always use the https:// \n protocol. This is useful for cases where the application itself is hosted using\n HTTP but is served to the outside world via a proxy that enables HTTPS. \n datasette mydatabase.db --setting force_https_urls 1", "sections_fts": 663, "rank": null} {"rowid": 505, "title": "truncate_cells_html", "content": "In the HTML table view, truncate any strings that are longer than this value.\n The full value will still be available in CSV, JSON and on the individual row\n HTML page. Set this to 0 to disable truncation. \n datasette mydatabase.db --setting truncate_cells_html 0", "sections_fts": 663, "rank": null} {"rowid": 504, "title": "max_csv_mb", "content": "The maximum size of CSV that can be exported, in megabytes. Defaults to 100MB.\n You can disable the limit entirely by setting this to 0: \n datasette mydatabase.db --setting max_csv_mb 0", "sections_fts": 663, "rank": null} {"rowid": 503, "title": "allow_csv_stream", "content": "Enables the CSV export feature where an entire table\n (potentially hundreds of thousands of rows) can be exported as a single CSV\n file. This is turned on by default - you can turn it off like this: \n datasette mydatabase.db --setting allow_csv_stream off", "sections_fts": 663, "rank": null} {"rowid": 502, "title": "cache_size_kb", "content": "Sets the amount of memory SQLite uses for its per-connection cache , in KB. \n datasette mydatabase.db --setting cache_size_kb 5000", "sections_fts": 663, "rank": null} {"rowid": 501, "title": "default_cache_ttl", "content": "Default HTTP caching max-age header in seconds, used for Cache-Control: max-age=X . Can be overridden on a per-request basis using the ?_ttl= query string parameter. Set this to 0 to disable HTTP caching entirely. Defaults to 5 seconds. \n datasette mydatabase.db --setting default_cache_ttl 60", "sections_fts": 663, "rank": null} {"rowid": 500, "title": "max_signed_tokens_ttl", "content": "Maximum allowed expiry time for signed API tokens created by users. \n Defaults to 0 which means no limit - tokens can be created that will never expire. \n Set this to a value in seconds to limit the maximum expiry time.
For example, to set that limit to 24 hours you would use: \n datasette mydatabase.db --setting max_signed_tokens_ttl 86400 \n This setting is enforced when incoming tokens are processed.", "sections_fts": 663, "rank": null} {"rowid": 499, "title": "allow_signed_tokens", "content": "Should users be able to create signed API tokens to access Datasette? \n This is turned on by default. Use the following to turn it off: \n datasette mydatabase.db --setting allow_signed_tokens off \n Turning this setting off will disable the /-/create-token page, described here . It will also cause any incoming Authorization: Bearer dstok_... API tokens to be ignored.", "sections_fts": 663, "rank": null} {"rowid": 498, "title": "allow_download", "content": "Should users be able to download the original SQLite database using a link on the database index page? This is turned on by default. However, databases can only be downloaded if they are served in immutable mode and not in-memory. If downloading is unavailable for either of these reasons, the download link is hidden even if allow_download is on. To disable database downloads, use the following: \n datasette mydatabase.db --setting allow_download off", "sections_fts": 663, "rank": null} {"rowid": 497, "title": "suggest_facets", "content": "Should Datasette calculate suggested facets? On by default, turn this off like so: \n datasette mydatabase.db --setting suggest_facets off", "sections_fts": 663, "rank": null} {"rowid": 496, "title": "facet_suggest_time_limit_ms", "content": "When Datasette calculates suggested facets it needs to run a SQL query for every column in your table. The default for this time limit is 50ms to account for the fact that it needs to run once for every column. If the time limit is exceeded the column will not be suggested as a facet. \n You can increase this time limit like so: \n datasette mydatabase.db --setting facet_suggest_time_limit_ms 500", "sections_fts": 663, "rank": null} {"rowid": 495, "title": "facet_time_limit_ms", "content": "This is the time limit Datasette allows for calculating a facet, which defaults to 200ms: \n datasette mydatabase.db --setting facet_time_limit_ms 1000", "sections_fts": 663, "rank": null} {"rowid": 494, "title": "default_facet_size", "content": "The default number of unique rows returned by Facets is 30. You can customize it like this: \n datasette mydatabase.db --setting default_facet_size 50", "sections_fts": 663, "rank": null} {"rowid": 493, "title": "allow_facet", "content": "Allow users to specify columns they would like to facet on using the ?_facet=COLNAME URL parameter to the table view. \n This is enabled by default. If disabled, facets will still be displayed if they have been specifically enabled in metadata.json configuration for the table. \n Here's how to disable this feature: \n datasette mydatabase.db --setting allow_facet off", "sections_fts": 663, "rank": null} {"rowid": 492, "title": "num_sql_threads", "content": "Maximum number of threads in the thread pool Datasette uses to execute SQLite queries. Defaults to 3. \n datasette mydatabase.db --setting num_sql_threads 10 \n Setting this to 0 turns off threaded SQL queries entirely - useful for environments that do not support threading such as Pyodide .", "sections_fts": 663, "rank": null} {"rowid": 491, "title": "max_insert_rows", "content": "Maximum rows that can be inserted at a time using the bulk insert API, see Inserting rows . Defaults to 100. 
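As a rough illustration of how this limit applies, here is a sketch of a bulk insert request that would exceed the default of 100 rows. The database name ( data ), table name ( mytable ), local URL and elided token are assumptions for the sake of the example - see Inserting rows for the authoritative details of the API:

import json
import urllib.request

# Build a batch of 200 rows - double the default max_insert_rows limit
rows = [{"id": i, "name": "Row %d" % i} for i in range(200)]

request = urllib.request.Request(
    "http://localhost:8001/data/mytable/-/insert",
    data=json.dumps({"rows": rows}).encode("utf-8"),
    headers={
        "Authorization": "Bearer dstok_...",  # signed API token, elided
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would be rejected with an error
# while max_insert_rows remains at its default of 100.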
\n You can increase or decrease this limit like so: \n datasette mydatabase.db --setting max_insert_rows 1000", "sections_fts": 663, "rank": null} {"rowid": 490, "title": "max_returned_rows", "content": "Datasette returns a maximum of 1,000 rows of data at a time. If you execute a query that returns more than 1,000 rows, Datasette will return the first 1,000 and include a warning that the result set has been truncated. You can use OFFSET/LIMIT or other methods in your SQL to implement pagination if you need to return more than 1,000 rows. \n You can increase or decrease this limit like so: \n datasette mydatabase.db --setting max_returned_rows 2000", "sections_fts": 663, "rank": null} {"rowid": 489, "title": "sql_time_limit_ms", "content": "By default, queries have a time limit of one second. If a query takes longer than this to run Datasette will terminate the query and return an error. \n If this time limit is too short for you, you can customize it using the sql_time_limit_ms setting - for example, to increase it to 3.5 seconds: \n datasette mydatabase.db --setting sql_time_limit_ms 3500 \n You can optionally set a lower time limit for an individual query using the ?_timelimit=100 query string argument: \n /my-database/my-table?qSpecies=44&_timelimit=100 \n This would set the time limit to 100ms for that specific query. This feature is useful if you are working with databases of unknown size and complexity - a query that might make perfect sense for a smaller table could take too long to execute on a table with millions of rows. By setting custom time limits you can execute queries \"optimistically\" - e.g. give me an exact count of rows matching this query but only if it takes less than 100ms to calculate.", "sections_fts": 663, "rank": null} {"rowid": 488, "title": "default_page_size", "content": "The default number of rows returned by the table page. You can override this on a per-page basis using the ?_size=80 query string parameter, provided you do not specify a value higher than the max_returned_rows setting. You can set this default using --setting like so: \n datasette mydatabase.db --setting default_page_size 50", "sections_fts": 663, "rank": null} {"rowid": 487, "title": "default_allow_sql", "content": "Should users be able to execute arbitrary SQL queries by default? \n Setting this to off causes permission checks for execute-sql to fail by default. \n datasette mydatabase.db --setting default_allow_sql off \n Another way to achieve this is to add \"allow_sql\": false to your datasette.yaml file, as described in Controlling the ability to execute arbitrary SQL . This setting offers a more convenient way to achieve the same result.", "sections_fts": 663, "rank": null} {"rowid": 486, "title": "Settings", "content": "The following options can be set using --setting name value , or by storing them in the settings.json file for use with Configuration directory mode .", "sections_fts": 663, "rank": null} {"rowid": 485, "title": "Configuration directory mode", "content": "Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose: \n datasette one.db two.db \\\n --metadata=metadata.json \\\n --template-dir=templates/ \\\n --plugins-dir=plugins \\\n --static css:css \n As an alternative to this, you can run Datasette in configuration directory mode.
Create a directory with the following structure: \n # In a directory called my-app:\nmy-app/one.db\nmy-app/two.db\nmy-app/datasette.yaml\nmy-app/metadata.json\nmy-app/templates/index.html\nmy-app/plugins/my_plugin.py\nmy-app/static/my.css \n Now start Datasette by providing the path to that directory: \n datasette my-app/ \n Datasette will detect the files in that directory and automatically configure itself using them. It will serve all *.db files that it finds, will load metadata.json if it exists, and will load the templates , plugins and static folders if they are present. \n The files that can be included in this directory are as follows. All are optional. \n \n \n *.db (or *.sqlite3 or *.sqlite ) - SQLite database files that will be served by Datasette \n \n \n datasette.yaml - Configuration for the Datasette instance \n \n \n metadata.json - Metadata for those databases - metadata.yaml or metadata.yml can be used as well \n \n \n inspect-data.json - the result of running datasette inspect *.db --inspect-file=inspect-data.json from the configuration directory - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running \n \n \n templates/ - a directory containing Custom templates \n \n \n plugins/ - a directory containing plugins, see Writing one-off plugins \n \n \n static/ - a directory containing static files - these will be served from /static/filename.txt , see Serving static files", "sections_fts": 663, "rank": null} {"rowid": 484, "title": "Using --setting", "content": "Datasette supports a number of settings. These can be set using the --setting name value option to datasette serve . \n You can set multiple settings at once like this: \n datasette mydatabase.db \\\n --setting default_page_size 50 \\\n --setting sql_time_limit_ms 3500 \\\n --setting max_returned_rows 2000 \n Settings can also be specified in the datasette.yaml configuration file .", "sections_fts": 663, "rank": null} {"rowid": 483, "title": "Settings", "content": "", "sections_fts": 663, "rank": null}
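As a closing example tying these last two sections together, the settings.json file mentioned under Settings can live in a configuration directory so that options no longer need to be repeated with --setting on every launch. The file name comes from the documentation above; the specific values are illustrative:

A my-app/settings.json file containing:

{
    "default_page_size": 50,
    "sql_time_limit_ms": 3500,
    "max_returned_rows": 2000
}

Starting Datasette with datasette my-app/ will then apply those settings automatically, exactly as if each one had been passed using --setting name value.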