sections
609 rows sorted by ref
ref (577 values)
- id1 24
- id2 7
- id3 3
- authentication 2
- allowdebugview 1
- alter-table-support-for-create-insert-upsert-and-update 1
- apache-proxy-configuration 1
- asgi 1
- authentication-actor 1
- authentication-actor-matches-allow 1
- authentication-cli-create-token 1
- authentication-cli-create-token-restrict 1
- authentication-ds-actor 1
- authentication-ds-actor-expiry 1
- authentication-permissions 1
- authentication-permissions-allow 1
- authentication-permissions-config 1
- authentication-permissions-database 1
- authentication-permissions-execute-sql 1
- authentication-permissions-explained 1
- authentication-permissions-instance 1
- authentication-permissions-other 1
- authentication-permissions-query 1
- authentication-permissions-table 1
- authentication-root 1
- better-plugin-documentation 1
- binary 1
- binary-data 1
- binary-linking 1
- binary-plugins 1
- bug-fixes 1
- bug-fixes-and-other-improvements 1
- canned-queries-json-api 1
- canned-queries-magic-parameters 1
- canned-queries-named-parameters 1
- canned-queries-options 1
- canned-queries-writable 1
- cli-datasette-get 1
- cli-datasette-serve-env 1
- cli-help-create-token-help 1
- cli-help-help 1
- cli-help-inspect-help 1
- cli-help-install-help 1
- cli-help-package-help 1
- cli-help-plugins-help 1
- cli-help-publish-cloudrun-help 1
- cli-help-publish-help 1
- cli-help-publish-heroku-help 1
- cli-help-serve-help 1
- cli-help-serve-help-settings 1
- cli-help-uninstall-help 1
- cli-package 1
- cli-publish 1
- code-formatting-with-black-and-prettier 1
- column-filter-arguments 1
- config-dir 1
- configuration 1
- configuration-cli 1
- configuration-reference 1
- configuration-reference-canned-queries 1
- configuration-reference-css-js 1
- configuration-reference-permissions 1
- configuration-reference-plugins 1
- configuration-reference-settings 1
- configuring-fts-by-hand 1
- configuring-fts-using-csvs-to-sqlite 1
- configuring-fts-using-sqlite-utils 1
- contents 1
- contributing-alpha-beta 1
- contributing-bug-fix-branch 1
- contributing-continuous-deployment 1
- contributing-debugging 1
- contributing-documentation 1
- contributing-documentation-cog 1
- contributing-formatting 1
- contributing-formatting-black 1
- contributing-formatting-blacken-docs 1
- contributing-formatting-prettier 1
- contributing-release 1
- contributing-running-tests 1
- contributing-upgrading-codemirror 1
- contributing-using-fixtures 1
- control-http-caching-with-ttl 1
- cookie-methods 1
- createtokenview 1
- csrf-protection 1
- css-classes-on-the-body 1
- csv-export 1
- csv-export-url-parameters 1
- custom-pages-404 1
- custom-pages-errors 1
- custom-pages-headers 1
- custom-pages-parameters 1
- custom-pages-redirects 1
- customization 1
- customization-css 1
- customization-custom-templates 1
- customization-static-files 1
- database-close 1
- database-constructor 1
- database-execute 1
- database-execute-fn 1
- database-execute-isolated-fn 1
- database-execute-write 1
- database-execute-write-fn 1
- database-execute-write-many 1
- database-execute-write-script 1
- database-hash 1
- database-level-metadata 1
- database-results 1
- databaseview 1
- databaseview-hidden 1
- datasette 1
- datasette-absolute-url 1
- datasette-actors-from-ids 1
- datasette-add-database 1
- datasette-add-memory-database 1
- datasette-add-message 1
- datasette-check-visibility 1
- datasette-create-token 1
- datasette-databases 1
- datasette-ensure-permissions 1
- datasette-get-column-metadata 1
- datasette-get-database 1
- datasette-get-database-metadata 1
- datasette-get-instance-metadata 1
- datasette-get-permission 1
- datasette-get-resource-metadata 1
- datasette-get-set-metadata 1
- datasette-permission-allowed 1
- datasette-permissions 1
- datasette-plugin-config 1
- datasette-remove-database 1
- datasette-render-template 1
- datasette-resolve-database 1
- datasette-resolve-row 1
- datasette-resolve-table 1
- datasette-set-column-metadata 1
- datasette-set-database-metadata 1
- datasette-set-instance-metadata 1
- datasette-set-resource-metadata 1
- datasette-setting 1
- datasette-sign 1
- datasette-track-event 1
- datasette-unsign 1
- deploying 1
- deploying-buildpacks 1
- deploying-fundamentals 1
- deploying-openrc 1
- deploying-plugins-using-datasette-publish 1
- deploying-proxy 1
- deploying-systemd 1
- devenvironment 1
- documentation 1
- dogsheep 1
- ecosystem 1
- expand-foreign-keys 1
- facet-by-date 1
- faceting 1
- facets-in-query-strings 1
- facets-metadata 1
- features 1
- flash-messages 1
- foreign-key-expansions 1
- fragment 1
- full-text-search-advanced-queries 1
- full-text-search-custom-sql 1
- full-text-search-enabling 1
- full-text-search-fts-versions 1
- full-text-search-table-or-view 1
- full-text-search-table-view-api 1
- general-guidelines 1
- getting-started 1
- getting-started-datasette-lite 1
- getting-started-demo 1
- getting-started-glitch 1
- getting-started-tutorial 1
- getting-started-your-computer 1
- hide-sql 1
- http-caching 1
- id10 1
- id106 1
- id11 1
- id110 1
- id117 1
- id118 1
- id12 1
- id121 1
- id126 1
- id13 1
- id139 1
- id14 1
- id148 1
- id15 1
- id155 1
- id157 1
- id158 1
- id16 1
- id164 1
- id17 1
- id177 1
- id18 1
- id186 1
- id19 1
- id20 1
- id201 1
- id21 1
- id211 1
- id212 1
- id214 1
- id215 1
- id22 1
- id23 1
- id24 1
- id25 1
- id26 1
- id27 1
- id28 1
- id29 1
- id30 1
- id31 1
- id32 1
- id33 1
- id34 1
- id35 1
- id36 1
- id37 1
- id38 1
- id39 1
- id4 1
- id40 1
- id41 1
- id42 1
- id43 1
- id44 1
- id45 1
- id46 1
- id47 1
- id48 1
- id49 1
- id5 1
- id50 1
- id51 1
- id52 1
- id53 1
- id54 1
- id55 1
- id56 1
- id57 1
- id58 1
- id59 1
- id6 1
- id60 1
- id61 1
- id62 1
- id63 1
- id64 1
- id65 1
- id66 1
- id67 1
- id68 1
- id69 1
- id7 1
- id70 1
- id71 1
- id72 1
- id73 1
- id74 1
- id75 1
- id76 1
- id77 1
- id78 1
- id79 1
- id8 1
- id80 1
- id81 1
- id82 1
- id83 1
- id84 1
- id85 1
- id86 1
- id87 1
- id88 1
- id89 1
- id9 1
- id90 1
- id91 1
- id92 1
- id93 1
- id94 1
- id95 1
- id96 1
- id97 1
- importing-geojson-polygons-using-shapely 1
- importing-shapefiles-into-spatialite 1
- improved-support-for-spatialite 1
- indexview 1
- installation-advanced 1
- installation-basic 1
- installation-datasette-desktop 1
- installation-docker 1
- installation-extensions 1
- installation-homebrew 1
- installation-pip 1
- installation-pipx 1
- installing-plugins 1
- installing-plugins-using-pipx 1
- installing-spatialite-on-linux 1
- installing-spatialite-on-os-x 1
- internals 1
- internals-csrf 1
- internals-database 1
- internals-database-introspection 1
- internals-datasette 1
- internals-datasette-client 1
- internals-datasette-urls 1
- internals-internal 1
- internals-internal-schema 1
- internals-multiparams 1
- internals-request 1
- internals-response 1
- internals-response-asgi-send 1
- internals-response-set-cookie 1
- internals-shortcuts 1
- internals-tilde-encoding 1
- internals-tracer 1
- internals-tracer-trace-child-tasks 1
- internals-utils 1
- internals-utils-await-me-maybe 1
- internals-utils-named-parameters 1
- internals-utils-parse-metadata 1
- javascript-datasette-init 1
- javascript-datasette-manager 1
- javascript-datasette-manager-selectors 1
- javascript-modules 1
- javascript-plugins 1
- javascript-plugins-makeabovetablepanelconfigs 1
- javascript-plugins-makecolumnactions 1
- json-api-cors 1
- json-api-default 1
- json-api-discover-alternate 1
- json-api-pagination 1
- json-api-shapes 1
- json-api-special 1
- json-api-table-arguments 1
- json-api-write 1
- jsondataview-actor 1
- jsondataview-config 1
- jsondataview-databases 1
- jsondataview-metadata 1
- jsondataview-plugins 1
- jsondataview-settings 1
- jsondataview-threads 1
- jsondataview-versions 1
- label-columns 1
- latest-datasette-io 1
- loading-spatialite 1
- log-out 1
- logoutview 1
- magic-parameters-for-canned-queries 1
- making-use-of-a-spatial-index 1
- messagesdebugview 1
- metadata-column-descriptions 1
- metadata-default-sort 1
- metadata-fallback-has-been-removed 1
- metadata-hiding-tables 1
- metadata-page-size 1
- metadata-sortable-columns 1
- metadata-source-license-about 1
- minor-fixes 1
- miscellaneous 1
- named-in-memory-database-support 1
- new-configuration-settings 1
- new-features 1
- new-plugin-hook-asgi-wrapper 1
- new-plugin-hook-extra-template-vars 1
- new-plugin-hooks 1
- new-visual-design 1
- nginx-proxy-configuration 1
- one-off-plugins-using-plugins-dir 1
- other-changes 1
- other-small-fixes 1
- pages 1
- per-database-and-per-table-metadata 1
- performance 1
- performance-hashed-urls 1
- performance-immutable-mode 1
- performance-inspect 1
- permission-checks-now-consider-opinions-from-every-plugin 1
- permissions 1
- permissions-alter-table 1
- permissions-create-table 1
- permissions-debug-menu 1
- permissions-delete-row 1
- permissions-drop-table 1
- permissions-execute-sql 1
- permissions-fix-for-the-upsert-api 1
- permissions-insert-row 1
- permissions-permissions-debug 1
- permissions-plugins 1
- permissions-update-row 1
- permissions-view-database 1
- permissions-view-database-download 1
- permissions-view-instance 1
- permissions-view-query 1
- permissions-view-table 1
- permissionsdebugview 1
- plugin-actions 1
- plugin-asgi-wrapper 1
- plugin-event-tracking 1
- plugin-hook-actor-from-request 1
- plugin-hook-actors-from-ids 1
- plugin-hook-canned-queries 1
- plugin-hook-database-actions 1
- plugin-hook-extra-body-script 1
- plugin-hook-extra-css-urls 1
- plugin-hook-extra-js-urls 1
- plugin-hook-extra-template-vars 1
- plugin-hook-filters-from-request 1
- plugin-hook-forbidden 1
- plugin-hook-handle-exception 1
- plugin-hook-homepage-actions 1
- plugin-hook-jinja2-environment-from-request 1
- plugin-hook-menu-links 1
- plugin-hook-permission-allowed 1
- plugin-hook-prepare-connection 1
- plugin-hook-prepare-jinja2-environment 1
- plugin-hook-publish-subcommand 1
- plugin-hook-query-actions 1
- plugin-hook-register-commands 1
- plugin-hook-register-events 1
- plugin-hook-register-magic-parameters 1
- plugin-hook-render-cell 1
- plugin-hook-row-actions 1
- plugin-hook-skip-csrf 1
- plugin-hook-slots 1
- plugin-hook-startup 1
- plugin-hook-table-actions 1
- plugin-hook-top-canned-query 1
- plugin-hook-top-database 1
- plugin-hook-top-homepage 1
- plugin-hook-top-query 1
- plugin-hook-top-row 1
- plugin-hook-top-table 1
- plugin-hook-track-event 1
- plugin-hook-view-actions 1
- plugin-hooks 1
- plugin-hooks-and-internals 1
- plugin-page-extras 1
- plugin-register-facet-classes 1
- plugin-register-output-renderer 1
- plugin-register-permissions 1
- plugin-register-routes 1
- plugins-and-internals 1
- plugins-can-now-add-links-within-datasette 1
- plugins-configuration 1
- plugins-configuration-secret 1
- plugins-datasette-load-plugins 1
- plugins-installed 1
- plugins-installing 1
- publish-cloud-run 1
- publish-custom-metadata-and-plugins 1
- publish-fly 1
- publish-heroku 1
- publish-vercel 1
- publishing 1
- publishing-static-assets 1
- querying-polygons-using-within 1
- queryview 1
- register-routes-plugin-hooks 1
- rowdeleteview 1
- rowupdateview 1
- rowview 1
- running-datasette-behind-a-proxy 1
- secret-plugin-configuration-options 1
- setting-allow-csv-stream 1
- setting-allow-download 1
- setting-allow-facet 1
- setting-allow-signed-tokens 1
- setting-base-url 1
- setting-cache-size-kb 1
- setting-default-allow-sql 1
- setting-default-cache-ttl 1
- setting-default-facet-size 1
- setting-default-page-size 1
- setting-facet-suggest-time-limit-ms 1
- setting-facet-time-limit-ms 1
- setting-force-https-urls 1
- setting-max-csv-mb 1
- setting-max-insert-rows 1
- setting-max-returned-rows 1
- setting-max-signed-tokens-ttl 1
- setting-num-sql-threads 1
- setting-publish-secrets 1
- setting-secret 1
- setting-sql-time-limit-ms 1
- setting-suggest-facets 1
- setting-template-debug 1
- setting-trace-debug 1
- setting-truncate-cells-html 1
- signed-api-tokens 1
- signed-values-and-secrets 1
- small-changes 1
- smaller-changes 1
- spatial-indexing-latitude-longitude-columns 1
- spatialite-installation 1
- spatialite-warning 1
- speeding-up-facets-with-indexes 1
- sql 1
- sql-parameters 1
- sql-views 1
- sqlite-utils 1
- streaming-all-records 1
- suggested-facets 1
- table-level-metadata 1
- tablecreateview 1
- tablecreateview-example 1
- tabledropview 1
- tableinsertview 1
- tableupsertview 1
- tableview 1
- testing-datasette-client 1
- testing-plugins-datasette-test-instance 1
- testing-plugins-fixtures 1
- testing-plugins-pdb 1
- testing-plugins-pytest-httpx 1
- testing-plugins-register-in-test 1
- the-internal-database 1
- the-road-to-datasette-1-0 1
- through-for-joins-through-many-to-many-tables 1
- top-level-metadata 1
- upgrade-guide-v1 1
- upgrade-guide-v1-metadata 1
- upgrade-guide-v1-metadata-json-removed 1
- upgrade-guide-v1-metadata-method-removed 1
- upgrade-guide-v1-metadata-removed 1
- upgrade-guide-v1-metadata-split 1
- upgrade-guide-v1-metadata-upgrade 1
- upgrade-guide-v1-sql-queries 1
- upgrading-packages-using-pipx 1
- url-building 1
- using-setting 1
- v0-28-databases-that-change 1
- v0-28-faceting 1
- v0-28-medium-changes 1
- v0-28-publish-cloudrun 1
- v0-28-register-output-renderer 1
- v0-29-medium-changes 1
- v1-0-a0 1
- v1-0-a1 1
- v1-0-a10 1
- v1-0-a11 1
- v1-0-a12 1
- v1-0-a13 1
- v1-0-a14 1
- v1-0-a15 1
- v1-0-a16 1
- v1-0-a2 1
- v1-0-a3 1
- v1-0-a4 1
- v1-0-a5 1
- v1-0-a6 1
- v1-0-a7 1
- v1-0-a8 1
- v1-0-a9 1
- writable-canned-queries 1
- write-api 1
- writing-plugins-building-urls 1
- writing-plugins-configuration 1
- writing-plugins-cookiecutter 1
- writing-plugins-custom-templates 1
- writing-plugins-designing-urls 1
- writing-plugins-extra-hooks 1
- writing-plugins-one-off 1
- writing-plugins-packaging 1
- writing-plugins-static-assets 1
- writing-plugins-tracing 1
page (32 values)
- changelog 195
- internals 66
- plugin_hooks 45
- authentication 38
- settings 29
- json_api 19
- contributing 17
- cli-reference 15
- installation 13
- metadata 13
- sql_queries 13
- custom_templates 12
- writing_plugins 11
- full_text_search 10
- introspection 10
- spatialite 10
- upgrade_guide 10
- configuration 8
- deploying 8
- plugins 8
- publish 8
- facets 7
- javascript_plugins 7
- pages 7
- testing_plugins 7
- getting_started 6
- performance 5
- binary_data 3
- csv_export 3
- ecosystem 3
- index 2
- events 1
id | page | ref | title | content | breadcrumbs | references |
---|---|---|---|---|---|---|
authentication:allowdebugview | authentication | allowdebugview | The /-/allow-debug tool | The /-/allow-debug tool lets you try out different "action" blocks against different "actor" JSON objects. You can try that out here: https://latest.datasette.io/-/allow-debug | ["Authentication and permissions", "Permissions"] | [{"href": "https://latest.datasette.io/-/allow-debug", "label": "https://latest.datasette.io/-/allow-debug"}] |
changelog:alter-table-support-for-create-insert-upsert-and-update | changelog | alter-table-support-for-create-insert-upsert-and-update | Alter table support for create, insert, upsert and update | The JSON write API can now be used to apply simple alter table schema changes, provided the acting actor has the new alter-table permission. ( #2101 ) The only alter operation supported so far is adding new columns to an existing table. The /db/-/create API now adds new columns during large operations to create a table based on incoming example "rows" , in the case where one of the later rows includes columns that were not present in the earlier batches. This requires the create-table but not the alter-table permission. When /db/-/create is called with rows in a situation where the table may have been already created, an "alter": true key can be included to indicate that any missing columns from the new rows should be added to the table. This requires the alter-table permission. /db/table/-/insert and /db/table/-/upsert and /db/table/row-pks/-/update all now also accept "alter": true , depending on the alter-table permission. Operations that alter a table now fire the new alter-table event . | ["Changelog", "1.0a9 (2024-02-16)"] | [{"href": "https://github.com/simonw/datasette/issues/2101", "label": "#2101"}] |
deploying:apache-proxy-configuration | deploying | apache-proxy-configuration | Apache proxy configuration | For Apache , you can use the ProxyPass directive. First make sure the following lines are uncommented: LoadModule proxy_module lib/httpd/modules/mod_proxy.so LoadModule proxy_http_module lib/httpd/modules/mod_proxy_http.so Then add these directives to proxy traffic: ProxyPass /my-datasette/ http://127.0.0.1:8009/my-datasette/ ProxyPreserveHost On A live demo of Datasette running behind Apache using this proxy setup can be seen at datasette-apache-proxy-demo.datasette.io/prefix/ . The code for that demo can be found in the demos/apache-proxy directory. Using --uds you can use Unix domain sockets similar to the nginx example: ProxyPass /my-datasette/ unix:/tmp/datasette.sock|http://localhost/my-datasette/ The ProxyPreserveHost On directive ensures that the original Host: header from the incoming request is passed through to Datasette. Datasette needs this to correctly assemble links to other pages using the .absolute_url(request, path) method. | ["Deploying Datasette", "Running Datasette behind a proxy"] | [{"href": "https://httpd.apache.org/", "label": "Apache"}, {"href": "https://datasette-apache-proxy-demo.datasette.io/prefix/", "label": "datasette-apache-proxy-demo.datasette.io/prefix/"}, {"href": "https://github.com/simonw/datasette/tree/main/demos/apache-proxy", "label": "demos/apache-proxy"}, {"href": "https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost", "label": "ProxyPreserveHost On"}] |
changelog:asgi | changelog | asgi | ASGI | ASGI is the Asynchronous Server Gateway Interface standard. I've been wanting to convert Datasette into an ASGI application for over a year - Port Datasette to ASGI #272 tracks thirteen months of intermittent development - but with Datasette 0.29 the change is finally released. This also means Datasette now runs on top of Uvicorn and no longer depends on Sanic . I wrote about the significance of this change in Porting Datasette to ASGI, and Turtles all the way down . The most exciting consequence of this change is that Datasette plugins can now take advantage of the ASGI standard. | ["Changelog", "0.29 (2019-07-07)"] | [{"href": "https://asgi.readthedocs.io/", "label": "ASGI"}, {"href": "https://github.com/simonw/datasette/issues/272", "label": "Port Datasette to ASGI #272"}, {"href": "https://www.uvicorn.org/", "label": "Uvicorn"}, {"href": "https://github.com/huge-success/sanic", "label": "Sanic"}, {"href": "https://simonwillison.net/2019/Jun/23/datasette-asgi/", "label": "Porting Datasette to ASGI, and Turtles all the way down"}] |
changelog:authentication | changelog | authentication | Authentication | Prior to this release the Datasette ecosystem has treated authentication as exclusively the realm of plugins, most notably through datasette-auth-github . 0.44 introduces Authentication and permissions as core Datasette concepts ( #699 ). This enables different plugins to share responsibility for authenticating requests - you might have one plugin that handles user accounts and another one that allows automated access via API keys, for example. You'll need to install plugins if you want full user accounts, but default Datasette can now authenticate a single root user with the new --root command-line option, which outputs a one-time use URL to authenticate as a root actor ( #784 ): datasette fixtures.db --root http://127.0.0.1:8001/-/auth-token?token=5b632f8cd44b868df625f5a6e2185d88eea5b22237fd3cc8773f107cc4fd6477 INFO: Started server process [14973] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) Plugins can implement new ways of authenticating users using the new actor_from_request(datasette, request) hook. | ["Changelog", "0.44 (2020-06-11)"] | [{"href": "https://github.com/simonw/datasette-auth-github", "label": "datasette-auth-github"}, {"href": "https://github.com/simonw/datasette/issues/699", "label": "#699"}, {"href": "https://github.com/simonw/datasette/issues/784", "label": "#784"}] |
authentication:authentication | authentication | authentication | Authentication and permissions | Datasette doesn't require authentication by default. Any visitor to a Datasette instance can explore the full data and execute read-only SQL queries. Datasette's plugin system can be used to add many different styles of authentication, such as user accounts, single sign-on or API keys. | [] | [] |
authentication:authentication-actor | authentication | authentication-actor | Actors | Through plugins, Datasette can support both authenticated users (with cookies) and authenticated API agents (via authentication tokens). The word "actor" is used to cover both of these cases. Every request to Datasette has an associated actor value, available in the code as request.actor . This can be None for unauthenticated requests, or a JSON compatible Python dictionary for authenticated users or API agents. The actor dictionary can be any shape - the design of that data structure is left up to the plugins. A useful convention is to include an "id" string, as demonstrated by the "root" actor below. Plugins can use the actor_from_request(datasette, request) hook to implement custom logic for authenticating an actor based on the incoming HTTP request. | ["Authentication and permissions"] | [] |
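A minimal sketch of the actor_from_request(datasette, request) hook mentioned in the row above. The bearer-token scheme and the token prefix used here are invented purely for illustration:

```python
from datasette import hookimpl


@hookimpl
def actor_from_request(datasette, request):
    # Hypothetical scheme: treat "Authorization: Bearer demo-<id>" as an
    # authenticated API agent. A real plugin would verify the token against
    # a secret or a lookup table.
    authorization = request.headers.get("authorization") or ""
    if authorization.startswith("Bearer demo-"):
        return {"id": authorization[len("Bearer demo-"):], "via": "api-token"}
    # None means "no opinion" - other authentication plugins still get a turn
    return None
```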
authentication:authentication-actor-matches-allow | authentication | authentication-actor-matches-allow | actor_matches_allow() | Plugins that wish to implement this same "allow" block permissions scheme can take advantage of the datasette.utils.actor_matches_allow(actor, allow) function: from datasette.utils import actor_matches_allow actor_matches_allow({"id": "root"}, {"id": "*"}) # returns True The currently authenticated actor is made available to plugins as request.actor . | ["Authentication and permissions"] | [] |
authentication:authentication-cli-create-token | authentication | authentication-cli-create-token | datasette create-token | You can also create tokens on the command line using the datasette create-token command. This command takes one required argument - the ID of the actor to be associated with the created token. You can specify a -e/--expires-after option in seconds. If omitted, the token will never expire. The command will sign the token using the DATASETTE_SECRET environment variable, if available. You can also pass the secret using the --secret option. This means you can run the command locally to create tokens for use with a deployed Datasette instance, provided you know that instance's secret. To create a token for the root actor that will expire in one hour: datasette create-token root --expires-after 3600 To create a token that never expires using a specific secret: datasette create-token root --secret my-secret-goes-here | ["Authentication and permissions", "API Tokens"] | [] |
authentication:authentication-cli-create-token-restrict | authentication | authentication-cli-create-token-restrict | Restricting the actions that a token can perform | Tokens created using datasette create-token ACTOR_ID will inherit all of the permissions of the actor that they are associated with. You can pass additional options to create tokens that are restricted to a subset of that actor's permissions. To restrict the token to just specific permissions against all available databases, use the --all option: datasette create-token root --all insert-row --all update-row This option can be passed as many times as you like. In the above example the token will only be allowed to insert and update rows. You can also restrict permissions such that they can only be used within specific databases: datasette create-token root --database mydatabase insert-row The resulting token will only be able to insert rows, and only to tables in the mydatabase database. Finally, you can restrict permissions to individual resources - tables, SQL views and named queries - within a specific database: datasette create-token root --resource mydatabase mytable insert-row These options have short versions: -a for --all , -d for --database and -r for --resource . You can add --debug to see a JSON representation of the token that has been created. Here's a full example: datasette create-token root \ --secret mysecret \ --all view-instance \ --all view-table \ --database docs view-query \ --resource docs documents insert-row \ --resource docs documents update-row \ --debug This example outputs the following: dstok_.eJxFizEKgDAMRe_y5w4qYrFXERGxDkVsMI0uxbubdjFL8l_ez1jhwEQCA6Fjjxp90qtkuHawzdjYrh8MFobLxZ_wBH0_gtnAF-hpS5VfmF8D_lnd97lHqUJgLd6sls4H1qwlhA.nH_7RecYHj5qSzvjhMU95iy0Xlc Decoded: { "a": "root", "token": "dstok", "t": 1670907246, "_r": { "a… | ["Authentication and permissions", "API Tokens", "datasette create-token"] | [] |
authentication:authentication-ds-actor | authentication | authentication-ds-actor | The ds_actor cookie | Datasette includes a default authentication plugin which looks for a signed ds_actor cookie containing a JSON actor dictionary. This is how the root actor mechanism works. Authentication plugins can set signed ds_actor cookies themselves like so: response = Response.redirect("/") response.set_cookie( "ds_actor", datasette.sign({"a": {"id": "cleopaws"}}, "actor"), ) Note that you need to pass "actor" as the namespace to .sign(value, namespace="default") . The shape of data encoded in the cookie is as follows: { "a": {... actor ...} } | ["Authentication and permissions"] | [] |
authentication:authentication-ds-actor-expiry | authentication | authentication-ds-actor-expiry | Including an expiry time | ds_actor cookies can optionally include a signed expiry timestamp, after which the cookies will no longer be valid. Authentication plugins may choose to use this mechanism to limit the lifetime of the cookie. For example, if a plugin implements single-sign-on against another source it may decide to set short-lived cookies so that if the user is removed from the SSO system their existing Datasette cookies will stop working shortly afterwards. To include an expiry, add an "e" key to the cookie value containing a base62-encoded integer representing the timestamp when the cookie should expire. For example, here's how to set a cookie that expires after 24 hours: import time from datasette.utils import baseconv expires_at = int(time.time()) + (24 * 60 * 60) response = Response.redirect("/") response.set_cookie( "ds_actor", datasette.sign( { "a": {"id": "cleopaws"}, "e": baseconv.base62.encode(expires_at), }, "actor", ), ) The resulting cookie will encode data that looks something like this: { "a": { "id": "cleopaws" }, "e": "1jjSji" } | ["Authentication and permissions", "The ds_actor cookie"] | [] |
authentication:authentication-permissions | authentication | authentication-permissions | Permissions | Datasette has an extensive permissions system built-in, which can be further extended and customized by plugins. The key question the permissions system answers is this: Is this actor allowed to perform this action , optionally against this particular resource ? Actors are described above . An action is a string describing the action the actor would like to perform. A full list is provided below - examples include view-table and execute-sql . A resource is the item the actor wishes to interact with - for example a specific database or table. Some actions, such as permissions-debug , are not associated with a particular resource. Datasette's built-in view permissions ( view-database , view-table etc) default to allow - unless you configure additional permission rules unauthenticated users will be allowed to access content. Permissions with potentially harmful effects should default to deny . Plugin authors should account for this when designing new plugins - for example, the datasette-upload-csvs plugin defaults to deny so that installations don't accidentally allow unauthenticated users to create new tables by uploading a CSV file. | ["Authentication and permissions"] | [{"href": "https://github.com/simonw/datasette-upload-csvs", "label": "datasette-upload-csvs"}] |
authentication:authentication-permissions-allow | authentication | authentication-permissions-allow | Defining permissions with "allow" blocks | The standard way to define permissions in Datasette is to use an "allow" block in the datasette.yaml file . This is a JSON document describing which actors are allowed to perform a permission. The most basic form of allow block is this ( allow demo , deny demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: id: root """).strip(), "YAML", "JSON" ) ]]] [[[end]]] This will match any actors with an "id" property of "root" - for example, an actor that looks like this: { "id": "root", "name": "Root User" } An allow block can specify "deny all" using false ( demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: false """).strip(), "YAML", "JSON" ) ]]] [[[end]]] An "allow" of true allows all access ( demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: true """).strip(), "YAML", "JSON" ) ]]] [[[end]]] Allow keys can provide a list of values. These will match any actor that has any of those values ( allow demo , deny demo ): [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ allow: id: - simon - cleopaws """).strip(), "YAML", "JSON" ) ]]] [[[end]]] This will match any actor with an "id" of either "simon" or "cleopaws" . Actors can have properties that feature a list of values. These will be matched against the list of values in an allow block. Consider the following actor: { "id": "simon"… | ["Authentication and permissions", "Permissions"] | [{"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%22id%22%3A+%22root%22%7D&allow=%7B%0D%0A++++++++%22id%22%3A+%22root%22%0D%0A++++%7D", "label": "allow demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%22id%22%3A+%22trevor%22%7D&allow=%7B%0D%0A++++++++%22id%22%3A+%22root%22%0D%0A++++%7D", "label": "deny demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22id%22%3A+%22root%22%0D%0A%7D&allow=false", "label": "demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22id%22%3A+%22root%22%0D%0A%7D&allow=true", "label": "demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22id%22%3A+%22cleopaws%22%0D%0A%7D&allow=%7B%0D%0A++++%22id%22%3A+%5B%0D%0A++++++++%22simon%22%2C%0D%0A++++++++%22cleopaws%22%0D%0A++++%5D%0D%0A%7D", "label": "allow demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22id%22%3A+%22pancakes%22%0D%0A%7D&allow=%7B%0D%0A++++%22id%22%3A+%5B%0D%0A++++++++%22simon%22%2C%0D%0A++++++++%22cleopaws%22%0D%0A++++%5D%0D%0A%7D", "label": "deny demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22id%22%3A+%22simon%22%2C%0D%0A++++%22roles%22%3A+%5B%0D%0A++++++++%22staff%22%2C%0D%0A++++++++%22developer%22%0D%0A++++%5D%0D%0A%7D&allow=%7B%0D%0A++++%22roles%22%3A+%5B%0D%0A++++++++%22developer%22%0D%0A++++%5D%0D%0A%7D", "label": "allow demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22id%22%3A+%22cleopaws%22%2C%0D%0A++++%22roles%22%3A+%5B%22dog%22%5D%0D%0A%7D&allow=%7B%0D%0A++++%22roles%22%3A+%5B%0D%0A++++++++%22developer%22%0D%0A++++%5D%0D%0A%7D", "label": "deny demo"}, {"href": 
"https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22id%22%3A+%22simon%22%0D%0A%7D&allow=%7B%0D%0A++++%22id%22%3A+%22*%22%0D%0A%7D", "label": "allow demo"}, {"href": "https://latest.datasette.io/-/allow-debug?actor=%7B%0D%0A++++%22bot%22%3A+%22readme-bot%22%0D%0A%7D&allow=%7B%0D%0A++++%22id%22%3A+%22*%22%0D%0A%7D", "label": "deny demo"… |
authentication:authentication-permissions-config | authentication | authentication-permissions-config | Access permissions in | There are two ways to configure permissions using datasette.yaml (or datasette.json ). For simple visibility permissions you can use "allow" blocks in the root, database, table and query sections. For other permissions you can use a "permissions" block, described in the next section . You can limit who is allowed to view different parts of your Datasette instance using "allow" keys in your Configuration . You can control the following: Access to the entire Datasette instance Access to specific databases Access to specific tables and views Access to specific Canned queries If a user cannot access a specific database, they will not be able to access tables, views or queries within that database. If a user cannot access the instance they will not be able to access any of the databases, tables, views or queries. | ["Authentication and permissions"] | [] |
authentication:authentication-permissions-database | authentication | authentication-permissions-database | Access to specific databases | To limit access to a specific private.db database to just authenticated users, use the "allow" block like this: [[[cog config_example(cog, """ databases: private: allow: id: "*" """) ]]] [[[end]]] | ["Authentication and permissions", "Access permissions in "] | [] |
authentication:authentication-permissions-execute-sql | authentication | authentication-permissions-execute-sql | Controlling the ability to execute arbitrary SQL | Datasette defaults to allowing any site visitor to execute their own custom SQL queries, for example using the form on the database page or by appending a ?_where= parameter to the table page like this . Access to this ability is controlled by the execute-sql permission. The easiest way to disable arbitrary SQL queries is using the default_allow_sql setting when you first start Datasette running. You can alternatively use an "allow_sql" block to control who is allowed to execute arbitrary SQL queries. To prevent any user from executing arbitrary SQL queries, use this: [[[cog config_example(cog, """ allow_sql: false """) ]]] [[[end]]] To enable just the root user to execute SQL for all databases in your instance, use the following: [[[cog config_example(cog, """ allow_sql: id: root """) ]]] [[[end]]] To limit this ability for just one specific database, use this: [[[cog config_example(cog, """ databases: mydatabase: allow_sql: id: root """) ]]] [[[end]]] | ["Authentication and permissions", "Access permissions in "] | [{"href": "https://latest.datasette.io/fixtures", "label": "the database page"}, {"href": "https://latest.datasette.io/fixtures/facetable?_where=_city_id=1", "label": "like this"}] |
authentication:authentication-permissions-explained | authentication | authentication-permissions-explained | How permissions are resolved | The datasette.permission_allowed(actor, action, resource=None, default=...) method is called to check if an actor is allowed to perform a specific action. This method asks every plugin that implements the permission_allowed(datasette, actor, action, resource) hook if the actor is allowed to perform the action. Each plugin can return True to indicate that the actor is allowed to perform the action, False if they are not allowed and None if the plugin has no opinion on the matter. False acts as a veto - if any plugin returns False then the permission check is denied. Otherwise, if any plugin returns True then the permission check is allowed. The resource argument can be used to specify a specific resource that the action is being performed against. Some permissions, such as view-instance , do not involve a resource. Others such as view-database have a resource that is a string naming the database. Permissions that take both a database name and the name of a table, view or canned query within that database use a resource that is a tuple of two strings, (database_name, resource_name) . Plugins that implement the permission_allowed() hook can decide if they are going to consider the provided resource or not. | ["Authentication and permissions", "Permissions"] | [] |
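A minimal sketch of the permission_allowed() hook described in the row above. The rule itself - vetoing execute-sql on a database named "internal" for anyone without a "developer" role - is an invented example, not part of Datasette:

```python
from datasette import hookimpl


@hookimpl
def permission_allowed(actor, action, resource):
    # For execute-sql the resource is the database name (a string)
    if action == "execute-sql" and resource == "internal":
        if actor and "developer" in (actor.get("roles") or []):
            return True  # explicitly allow
        return False  # veto - overrides any other plugin's True
    return None  # no opinion for everything else
```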
authentication:authentication-permissions-instance | authentication | authentication-permissions-instance | Access to an instance | Here's how to restrict access to your entire Datasette instance to just the "id": "root" user: [[[cog from metadata_doc import config_example config_example(cog, """ title: My private Datasette instance allow: id: root """) ]]] [[[end]]] To deny access to all users, you can use "allow": false : [[[cog config_example(cog, """ title: My entirely inaccessible instance allow: false """) ]]] [[[end]]] One reason to do this is if you are using a Datasette plugin - such as datasette-permissions-sql - to control permissions instead. | ["Authentication and permissions", "Access permissions in "] | [{"href": "https://github.com/simonw/datasette-permissions-sql", "label": "datasette-permissions-sql"}] |
authentication:authentication-permissions-other | authentication | authentication-permissions-other | Other permissions in | For all other permissions, you can use one or more "permissions" blocks in your datasette.yaml configuration file. To grant access to the permissions debug tool to all signed in users, you can grant permissions-debug to any actor with an id matching the wildcard * by adding this at the root of your configuration: [[[cog config_example(cog, """ permissions: debug-menu: id: '*' """) ]]] [[[end]]] To grant create-table to the user with id of editor for the docs database: [[[cog config_example(cog, """ databases: docs: permissions: create-table: id: editor """) ]]] [[[end]]] And for insert-row against the reports table in that docs database: [[[cog config_example(cog, """ databases: docs: tables: reports: permissions: insert-row: id: editor """) ]]] [[[end]]] The permissions debug tool can be useful for helping test permissions that you have configured in this way. | ["Authentication and permissions"] | [] |
authentication:authentication-permissions-query | authentication | authentication-permissions-query | Access to specific canned queries | Canned queries allow you to configure named SQL queries in your datasette.yaml that can be executed by users. These queries can be set up to both read and write to the database, so controlling who can execute them can be important. To limit access to the add_name canned query in your dogs.db database to just the root user : [[[cog config_example(cog, """ databases: dogs: queries: add_name: sql: INSERT INTO names (name) VALUES (:name) write: true allow: id: - root """) ]]] [[[end]]] | ["Authentication and permissions", "Access permissions in "] | [] |
authentication:authentication-permissions-table | authentication | authentication-permissions-table | Access to specific tables and views | To limit access to the users table in your bakery.db database: [[[cog config_example(cog, """ databases: bakery: tables: users: allow: id: '*' """) ]]] [[[end]]] This works for SQL views as well - you can list their names in the "tables" block above in the same way as regular tables. Restricting access to tables and views in this way will NOT prevent users from querying them using arbitrary SQL queries, like this for example. If you are restricting access to specific tables you should also use the "allow_sql" block to prevent users from bypassing the limit with their own SQL queries - see Controlling the ability to execute arbitrary SQL . | ["Authentication and permissions", "Access permissions in "] | [{"href": "https://latest.datasette.io/fixtures?sql=select+*+from+facetable", "label": "like this"}] |
authentication:authentication-root | authentication | authentication-root | Using the "root" actor | Datasette currently leaves almost all forms of authentication to plugins - datasette-auth-github for example. The one exception is the "root" account, which you can sign into while using Datasette on your local machine. This provides access to a small number of debugging features. To sign in as root, start Datasette using the --root command-line option, like this: datasette --root http://127.0.0.1:8001/-/auth-token?token=786fc524e0199d70dc9a581d851f466244e114ca92f33aa3b42a139e9388daa7 INFO: Started server process [25801] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit) The URL on the first line includes a one-use token which can be used to sign in as the "root" actor in your browser. Click on that link and then visit http://127.0.0.1:8001/-/actor to confirm that you are authenticated as an actor that looks like this: { "id": "root" } | ["Authentication and permissions", "Actors"] | [{"href": "https://github.com/simonw/datasette-auth-github", "label": "datasette-auth-github"}] |
changelog:better-plugin-documentation | changelog | better-plugin-documentation | Better plugin documentation | The plugin documentation has been re-arranged into four sections, including a brand new section on testing plugins. ( #687 ) Plugins introduces Datasette's plugin system and describes how to install and configure plugins. Writing plugins describes how to author plugins, from one-off single file plugins to packaged plugins that can be published to PyPI. It also describes how to start a plugin using the new datasette-plugin cookiecutter template. Plugin hooks is a full list of detailed documentation for every Datasette plugin hook. Testing plugins describes how to write tests for Datasette plugins, using pytest and HTTPX . | ["Changelog", "0.45 (2020-07-01)"] | [{"href": "https://github.com/simonw/datasette/issues/687", "label": "#687"}, {"href": "https://github.com/simonw/datasette-plugin", "label": "datasette-plugin"}, {"href": "https://docs.pytest.org/", "label": "pytest"}, {"href": "https://www.python-httpx.org/", "label": "HTTPX"}] |
binary_data:binary | binary_data | binary | Binary data | SQLite tables can contain binary data in BLOB columns. Datasette includes special handling for these binary values. The Datasette interface detects binary values and provides a link to download their content, for example on https://latest.datasette.io/fixtures/binary_data Binary data is represented in .json exports using Base64 encoding. https://latest.datasette.io/fixtures/binary_data.json?_shape=array [ { "rowid": 1, "data": { "$base64": true, "encoded": "FRwCx60F/g==" } }, { "rowid": 2, "data": { "$base64": true, "encoded": "FRwDx60F/g==" } }, { "rowid": 3, "data": null } ] | [] | [{"href": "https://latest.datasette.io/fixtures/binary_data", "label": "https://latest.datasette.io/fixtures/binary_data"}, {"href": "https://latest.datasette.io/fixtures/binary_data.json?_shape=array", "label": "https://latest.datasette.io/fixtures/binary_data.json?_shape=array"}] |
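A short sketch of decoding the $base64 wrapper used in the JSON export above, using only the standard library. The row value is copied from the example in that row:

```python
import base64
import json

# One row from the fixtures/binary_data.json?_shape=array export above
row = json.loads('{"rowid": 1, "data": {"$base64": true, "encoded": "FRwCx60F/g=="}}')

value = row["data"]
if isinstance(value, dict) and value.get("$base64"):
    raw = base64.b64decode(value["encoded"])
    print(len(raw), "bytes of BLOB data")  # 7 bytes of BLOB data
```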
changelog:binary-data | changelog | binary-data | Binary data | SQLite tables can contain binary data in BLOB columns. Datasette now provides links for users to download this data directly from Datasette, and uses those links to make binary data available from CSV exports. See Binary data for more details. ( #1036 and #1034 ). | ["Changelog", "0.51 (2020-10-31)"] | [{"href": "https://github.com/simonw/datasette/issues/1036", "label": "#1036"}, {"href": "https://github.com/simonw/datasette/issues/1034", "label": "#1034"}] |
binary_data:binary-linking | binary_data | binary-linking | Linking to binary downloads | The .blob output format is used to return binary data. It requires a _blob_column= query string argument specifying which BLOB column should be downloaded, for example: https://latest.datasette.io/fixtures/binary_data/1.blob?_blob_column=data This output format can also be used to return binary data from an arbitrary SQL query. Since such queries do not specify an exact row, an additional ?_blob_hash= parameter can be used to specify the SHA-256 hash of the value that is being linked to. Consider the query select data from binary_data - demonstrated here . That page links to the binary value downloads. Those links look like this: https://latest.datasette.io/fixtures.blob?sql=select+data+from+binary_data&_blob_column=data&_blob_hash=f3088978da8f9aea479ffc7f631370b968d2e855eeb172bea7f6c7a04262bb6d These .blob links are also returned in the .csv exports Datasette provides for binary tables and queries, since the CSV format does not have a mechanism for representing binary data. | ["Binary data"] | [{"href": "https://latest.datasette.io/fixtures/binary_data/1.blob?_blob_column=data", "label": "https://latest.datasette.io/fixtures/binary_data/1.blob?_blob_column=data"}, {"href": "https://latest.datasette.io/fixtures?sql=select+data+from+binary_data", "label": "demonstrated here"}, {"href": "https://latest.datasette.io/fixtures.blob?sql=select+data+from+binary_data&_blob_column=data&_blob_hash=f3088978da8f9aea479ffc7f631370b968d2e855eeb172bea7f6c7a04262bb6d", "label": "https://latest.datasette.io/fixtures.blob?sql=select+data+from+binary_data&_blob_column=data&_blob_hash=f3088978da8f9aea479ffc7f631370b968d2e855eeb172bea7f6c7a04262bb6d"}] |
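The _blob_hash= parameter described above is the SHA-256 hash of the binary value being linked to; a sketch of computing that hash client-side (the byte string is just a placeholder value):

```python
import hashlib

# Placeholder binary value as it might be read from a BLOB column
blob_value = b"\x15\x1c\x02\xc7\xad\x05\xfe"

# .blob links include the SHA-256 hex digest as ?_blob_hash=
print(hashlib.sha256(blob_value).hexdigest())
```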
binary_data:binary-plugins | binary_data | binary-plugins | Binary plugins | Several Datasette plugins are available that change the way Datasette treats binary data. datasette-render-binary modifies Datasette's default interface to show an automatic guess at what type of binary data is being stored, along with a visual representation of the binary value that displays ASCII strings directly in the interface. datasette-render-images detects common image formats and renders them as images directly in the Datasette interface. datasette-media allows Datasette interfaces to be configured to serve binary files from configured SQL queries, and includes the ability to resize images directly before serving them. | ["Binary data"] | [{"href": "https://github.com/simonw/datasette-render-binary", "label": "datasette-render-binary"}, {"href": "https://github.com/simonw/datasette-render-images", "label": "datasette-render-images"}, {"href": "https://github.com/simonw/datasette-media", "label": "datasette-media"}] |
changelog:bug-fixes | changelog | bug-fixes | Bug fixes | Don't show the facet option in the cog menu if faceting is not allowed. ( #1683 ) ?_sort and ?_sort_desc now work if the column that is being sorted has been excluded from the query using ?_col= or ?_nocol= . ( #1773 ) Fixed bug where ?_sort_desc was duplicated in the URL every time the Apply button was clicked. ( #1738 ) | ["Changelog", "0.62 (2022-08-14)"] | [{"href": "https://github.com/simonw/datasette/issues/1683", "label": "#1683"}, {"href": "https://github.com/simonw/datasette/issues/1773", "label": "#1773"}, {"href": "https://github.com/simonw/datasette/issues/1738", "label": "#1738"}] |
changelog:bug-fixes-and-other-improvements | changelog | bug-fixes-and-other-improvements | Bug fixes and other improvements | Custom pages now work correctly when combined with the base_url setting. ( #1238 ) Fixed intermittent error displaying the index page when the user did not have permission to access one of the tables. Thanks, Guy Freeman. ( #1305 ) Columns with the name "Link" are no longer incorrectly displayed in bold. ( #1308 ) Fixed error caused by tables with a single quote in their names. ( #1257 ) Updated dependencies: pytest-asyncio , Black , jinja2 , aiofiles , click , and itsdangerous . The official Datasette Docker image now supports apt-get install . ( #1320 ) The Heroku runtime used by datasette publish heroku is now python-3.8.10 . | ["Changelog", "0.57 (2021-06-05)"] | [{"href": "https://github.com/simonw/datasette/issues/1238", "label": "#1238"}, {"href": "https://github.com/simonw/datasette/issues/1305", "label": "#1305"}, {"href": "https://github.com/simonw/datasette/issues/1308", "label": "#1308"}, {"href": "https://github.com/simonw/datasette/issues/1257", "label": "#1257"}, {"href": "https://github.com/simonw/datasette/issues/1320", "label": "#1320"}] |
sql_queries:canned-queries-json-api | sql_queries | canned-queries-json-api | JSON API for writable canned queries | Writable canned queries can also be accessed using a JSON API. You can POST data to them using JSON, and you can request that their response is returned to you as JSON. To submit JSON to a writable canned query, encode key/value parameters as a JSON document: POST /mydatabase/add_message {"message": "Message goes here"} You can also continue to submit data using regular form encoding, like so: POST /mydatabase/add_message message=Message+goes+here There are three options for specifying that you would like the response to your request to return JSON data, as opposed to an HTTP redirect to another page. Set an Accept: application/json header on your request Include ?_json=1 in the URL that you POST to Include "_json": 1 in your JSON body, or &_json=1 in your form encoded body The JSON response will look like this: { "ok": true, "message": "Query executed, 1 row affected", "redirect": "/data/add_name" } The "message" and "redirect" values here will take into account on_success_message , on_success_message_sql , on_success_redirect , on_error_message and on_error_redirect , if they have been set. | ["Running SQL queries", "Canned queries"] | [] |
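A sketch of calling a writable canned query from Python, following the request and response shapes in the row above. The host and port are assumptions, and httpx is assumed to be installed:

```python
import httpx

response = httpx.post(
    "http://127.0.0.1:8001/mydatabase/add_message",
    json={"message": "Message goes here"},
    headers={"Accept": "application/json"},  # ask for JSON instead of a redirect
)
data = response.json()
print(data["ok"], data["message"], data["redirect"])
```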
sql_queries:canned-queries-magic-parameters | sql_queries | canned-queries-magic-parameters | Magic parameters | Named parameters that start with an underscore are special: they can be used to automatically add values created by Datasette that are not contained in the incoming form fields or query string. These magic parameters are only supported for canned queries: to avoid security issues (such as queries that extract the user's private cookies) they are not available to SQL that is executed by the user as a custom SQL query. Available magic parameters are: _actor_* - e.g. _actor_id , _actor_name Fields from the currently authenticated Actors . _header_* - e.g. _header_user_agent Header from the incoming HTTP request. The key should be in lower case and with hyphens converted to underscores e.g. _header_user_agent or _header_accept_language . _cookie_* - e.g. _cookie_lang The value of the incoming cookie of that name. _now_epoch The number of seconds since the Unix epoch. _now_date_utc The date in UTC, e.g. 2020-06-01 _now_datetime_utc The ISO 8601 datetime in UTC, e.g. 2020-06-24T18:01:07Z _random_chars_* - e.g. … | ["Running SQL queries", "Canned queries"] | [] |
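Magic parameters such as :_actor_id can be referenced directly in a canned query's SQL. A sketch of such a configuration, written as a Python dictionary in the same style as the config_example() blocks elsewhere in this table; the messages table is hypothetical:

```python
config = {
    "databases": {
        "mydatabase": {
            "queries": {
                "add_message": {
                    # :_actor_id is supplied by Datasette from the authenticated
                    # actor; :message comes from the submitted form field
                    "sql": (
                        "INSERT INTO messages (user_id, message) "
                        "VALUES (:_actor_id, :message)"
                    ),
                    "write": True,
                }
            }
        }
    }
}
```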
sql_queries:canned-queries-named-parameters | sql_queries | canned-queries-named-parameters | Canned query parameters | Canned queries support named parameters, so if you include those in the SQL you will then be able to enter them using the form fields on the canned query page or by adding them to the URL. This means canned queries can be used to create custom JSON APIs based on a carefully designed SQL statement. Here's an example of a canned query with a named parameter: select neighborhood, facet_cities.name, state from facetable join facet_cities on facetable.city_id = facet_cities.id where neighborhood like '%' || :text || '%' order by neighborhood; In the canned query configuration looks like this: [[[cog config_example(cog, """ databases: fixtures: queries: neighborhood_search: title: Search neighborhoods sql: |- select neighborhood, facet_cities.name, state from facetable join facet_cities on facetable.city_id = facet_cities.id where neighborhood like '%' || :text || '%' order by neighborhood """) ]]] [[[end]]] Note that we are using SQLite string concatenation here - the || operator - to add wildcard % characters to the string provided by the user. You can try this canned query out here: https://latest.datasette.io/fixtures/neighborhood_search?text=town In this example the :text named parameter is automatically extracted from the query using a regular expression. You can alternatively provide an explicit list of named parameters using the "params" key, like this: [[[cog config_example(cog, """ databases: fixtures: queries: neighborhood_search: title: Search neighborhoods params: - text sql: |- select neighborhood, facet_cities.name, state from facetable join facet_cities on facetable.city_id = facet_cities.id where neighborhood like … | ["Running SQL queries", "Canned queries"] | [{"href": "https://latest.datasette.io/fixtures/neighborhood_search?text=town", "label": "https://latest.datasette.io/fixtures/neighborhood_search?text=town"}] |
sql_queries:canned-queries-options | sql_queries | canned-queries-options | Additional canned query options | Additional options can be specified for canned queries in the YAML or JSON configuration. | ["Running SQL queries", "Canned queries"] | [] |
sql_queries:canned-queries-writable | sql_queries | canned-queries-writable | Writable canned queries | Canned queries by default are read-only. You can use the "write": true key to indicate that a canned query can write to the database. See Access to specific canned queries for details on how to add permission checks to canned queries, using the "allow" key. [[[cog config_example(cog, { "databases": { "mydatabase": { "queries": { "add_name": { "sql": "INSERT INTO names (name) VALUES (:name)", "write": True } } } } }) ]]] [[[end]]] This configuration will create a page at /mydatabase/add_name displaying a form with a name field. Submitting that form will execute the configured INSERT query. You can customize how Datasette represents success and errors using the following optional properties: on_success_message - the message shown when a query is successful on_success_message_sql - alternative to on_success_message : a SQL query that should be executed to generate the message on_success_redirect - the path or URL the user is redirected to on success on_error_message - the message shown when a query throws an error on_error_redirect - the path or URL the user is redirected to on error For example: [[[cog config_example(cog, { "databases": { "mydatabase": { "queries": { "add_name": { "sql": "INSERT INTO names (name) VALUES (:name)", "params": ["name"], "write": Tru… | ["Running SQL queries", "Canned queries"] | [] |
cli-reference:cli-datasette-get | cli-reference | cli-datasette-get | datasette --get | The --get option to datasette serve (or just datasette ) specifies the path to a page within Datasette and causes Datasette to output the content from that path without starting the web server. This means that all of Datasette's functionality can be accessed directly from the command-line. For example: datasette --get '/-/versions.json' | jq . { "python": { "version": "3.8.5", "full": "3.8.5 (default, Jul 21 2020, 10:48:26) \n[Clang 11.0.3 (clang-1103.0.32.62)]" }, "datasette": { "version": "0.46+15.g222a84a.dirty" }, "asgi": "3.0", "uvicorn": "0.11.8", "sqlite": { "version": "3.32.3", "fts_versions": [ "FTS5", "FTS4", "FTS3" ], "extensions": { "json1": null }, "compile_options": [ "COMPILER=clang-11.0.3", "ENABLE_COLUMN_METADATA", "ENABLE_FTS3", "ENABLE_FTS3_PARENTHESIS", "ENABLE_FTS4", "ENABLE_FTS5", "ENABLE_GEOPOLY", "ENABLE_JSON1", "ENABLE_PREUPDATE_HOOK", "ENABLE_RTREE", "ENABLE_SESSION", "MAX_VARIABLE_NUMBER=250000", "THREADSAFE=1" ] } } You can use the --token TOKEN option to send an API token with the simulated request. Or you can make a request as a specific actor by passing a JSON representation of that actor to --actor : datasette --memory --actor '{"id": "root"}' --get '/-/actor.json' The exit code of datasette --get will be 0 if the request succeeds and 1 if the request produced an HTTP status code other than 200 - e.g. a 404 or 500 error. This lets you use datasette --get / to run tests against a Datasette application in a continuous integration environment such as GitHub Actions. | ["CLI reference", "datasette serve"] | [] |
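Because datasette --get exits 0 on success and 1 otherwise, it slots neatly into a smoke-test script; a small sketch using the standard library (the database filename is a placeholder):

```python
import subprocess

result = subprocess.run(
    ["datasette", "mydatabase.db", "--get", "/-/versions.json"],
    capture_output=True,
    text=True,
)
# Exit code 0 means the simulated request returned HTTP 200
if result.returncode != 0:
    raise SystemExit(f"smoke test failed:\n{result.stdout}{result.stderr}")
print("ok")
```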
cli-reference:cli-datasette-serve-env | cli-reference | cli-datasette-serve-env | Environment variables | Some of the datasette serve options can be provided by environment variables: DATASETTE_SECRET : Equivalent to the --secret option. DATASETTE_SSL_KEYFILE : Equivalent to the --ssl-keyfile option. DATASETTE_SSL_CERTFILE : Equivalent to the --ssl-certfile option. DATASETTE_LOAD_EXTENSION : Equivalent to the --load-extension option. | ["CLI reference", "datasette serve"] | [] |
cli-reference:cli-help-create-token-help | cli-reference | cli-help-create-token-help | datasette create-token | Create a signed API token, see datasette create-token . [[[cog help(["create-token", "--help"]) ]]] Usage: datasette create-token [OPTIONS] ID Create a signed API token for the specified actor ID Example: datasette create-token root --secret mysecret To allow only "view-database-download" for all databases: datasette create-token root --secret mysecret \ --all view-database-download To allow "create-table" against a specific database: datasette create-token root --secret mysecret \ --database mydb create-table To allow "insert-row" against a specific table: datasette create-token root --secret myscret \ --resource mydb mytable insert-row Restricted actions can be specified multiple times using multiple --all, --database, and --resource options. Add --debug to see a decoded version of the token. Options: --secret TEXT Secret used for signing the API tokens [required] -e, --expires-after INTEGER Token should expire after this many seconds -a, --all ACTION Restrict token to this action -d, --database DB ACTION Restrict token to this action on this database -r, --resource DB RESOURCE ACTION Restrict token to this action on this database resource (a table, SQL view or named query) --debug Show decoded token --plugins-dir DIRECTORY Path to directory containing custom plugins --help Show this message and exit. [[[end]]] | ["CLI reference"] | [] |
cli-reference:cli-help-help | cli-reference | cli-help-help | datasette --help | Running datasette --help shows a list of all of the available commands. [[[cog help(["--help"]) ]]] Usage: datasette [OPTIONS] COMMAND [ARGS]... Datasette is an open source multi-tool for exploring and publishing data About Datasette: https://datasette.io/ Full documentation: https://docs.datasette.io/ Options: --version Show the version and exit. --help Show this message and exit. Commands: serve* Serve up specified SQLite database files with a web UI create-token Create a signed API token for the specified actor ID inspect Generate JSON summary of provided database files install Install plugins and packages from PyPI into the same... package Package SQLite files into a Datasette Docker container plugins List currently installed plugins publish Publish specified SQLite database files to the internet... uninstall Uninstall plugins and Python packages from the Datasette... [[[end]]] Additional commands added by plugins that use the register_commands(cli) hook will be listed here as well. | ["CLI reference"] | [] |
cli-reference:cli-help-inspect-help | cli-reference | cli-help-inspect-help | datasette inspect | Outputs JSON representing introspected data about one or more SQLite database files. If you are opening an immutable database, you can pass this file to the --inspect-file option to improve Datasette's performance by allowing it to skip running row counts against the database when it first starts running: datasette inspect mydatabase.db > inspect-data.json datasette serve -i mydatabase.db --inspect-file inspect-data.json This performance optimization is used automatically by some of the datasette publish commands. You are unlikely to need to apply this optimization manually. [[[cog help(["inspect", "--help"]) ]]] Usage: datasette inspect [OPTIONS] [FILES]... Generate JSON summary of provided database files This can then be passed to "datasette --inspect-file" to speed up count operations against immutable database files. Options: --inspect-file TEXT --load-extension PATH:ENTRYPOINT? Path to a SQLite extension to load, and optional entrypoint --help Show this message and exit. [[[end]]] | ["CLI reference"] | [] |
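As a rough illustration of what the generated file contains, this sketch loads an inspect-data.json file and prints the cached row counts. The key layout assumed here (database name, then a "tables" mapping with per-table "count" values) is an assumption based on the description above, not a documented schema:

```python
import json

# Assumed layout: {"mydatabase": {"tables": {"mytable": {"count": 123}, ...}, ...}}
with open("inspect-data.json") as f:
    inspect_data = json.load(f)

for database, info in inspect_data.items():
    print(database)
    for table, details in info.get("tables", {}).items():
        print(f"  {table}: {details.get('count')} rows")
```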
cli-reference:cli-help-install-help | cli-reference | cli-help-install-help | datasette install | Install new Datasette plugins. This command works like pip install but ensures that your plugins will be installed into the same environment as Datasette. This command: datasette install datasette-cluster-map Would install the datasette-cluster-map plugin. [[[cog help(["install", "--help"]) ]]] Usage: datasette install [OPTIONS] [PACKAGES]... Install plugins and packages from PyPI into the same environment as Datasette Options: -U, --upgrade Upgrade packages to latest version -r, --requirement PATH Install from requirements file -e, --editable TEXT Install a project in editable mode from this path --help Show this message and exit. [[[end]]] | ["CLI reference"] | [{"href": "https://datasette.io/plugins/datasette-cluster-map", "label": "datasette-cluster-map"}] |
cli-reference:cli-help-package-help | cli-reference | cli-help-package-help | datasette package | Package SQLite files into a Datasette Docker container, see datasette package . [[[cog help(["package", "--help"]) ]]] Usage: datasette package [OPTIONS] FILES... Package SQLite files into a Datasette Docker container Options: -t, --tag TEXT Name for the resulting Docker container, can optionally use name:tag format -m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish --extra-options TEXT Extra options to pass to datasette serve --branch TEXT Install datasette from a GitHub branch e.g. main --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --install TEXT Additional packages (e.g. plugins) to install --spatialite Enable SpatialLite extension --version-note TEXT Additional note to show on /-/versions --secret TEXT Secret used for signing secure values, such as signed cookies -p, --port INTEGER RANGE Port to run the server on, defaults to 8001 [1<=x<=65535] --title TEXT Title for metadata --license TEXT License label for metadata --license_url TEXT License URL for metadata --source TEXT Source label for metadata --source_url TEXT Source URL for metadata --about TEXT About label for metadata --about_url TEXT About URL for metadata --help Show this message and exit. [[[end]]] | ["CLI reference"] | [] |
cli-reference:cli-help-plugins-help | cli-reference | cli-help-plugins-help | datasette plugins | Output JSON showing all currently installed plugins, their versions, whether they include static files or templates and which Plugin hooks they use. [[[cog help(["plugins", "--help"]) ]]] Usage: datasette plugins [OPTIONS] List currently installed plugins Options: --all Include built-in default plugins --requirements Output requirements.txt of installed plugins --plugins-dir DIRECTORY Path to directory containing custom plugins --help Show this message and exit. [[[end]]] Example output: [ { "name": "datasette-geojson", "static": false, "templates": false, "version": "0.3.1", "hooks": [ "register_output_renderer" ] }, { "name": "datasette-geojson-map", "static": true, "templates": false, "version": "0.4.0", "hooks": [ "extra_body_script", "extra_css_urls", "extra_js_urls" ] }, { "name": "datasette-leaflet", "static": true, "templates": false, "version": "0.2.2", "hooks": [ "extra_body_script", "extra_template_vars" ] } ] | ["CLI reference"] | [] |
cli-reference:cli-help-publish-cloudrun-help | cli-reference | cli-help-publish-cloudrun-help | datasette publish cloudrun | See Publishing to Google Cloud Run . [[[cog help(["publish", "cloudrun", "--help"]) ]]] Usage: datasette publish cloudrun [OPTIONS] [FILES]... Publish databases to Datasette running on Cloud Run Options: -m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish --extra-options TEXT Extra options to pass to datasette serve --branch TEXT Install datasette from a GitHub branch e.g. main --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --install TEXT Additional packages (e.g. plugins) to install --plugin-secret <TEXT TEXT TEXT>... Secrets to pass to plugins, e.g. --plugin- secret datasette-auth-github client_id xxx --version-note TEXT Additional note to show on /-/versions --secret TEXT Secret used for signing secure values, such as signed cookies --title TEXT Title for metadata --license TEXT License label for metadata --license_url TEXT License URL for metadata --source TEXT Source label for metadata --source_url TEXT Source URL for metadata --about TEXT About label for metadata --about_url TEXT About URL for metadata -n, --name TEXT Application name to use when building --service TEXT Cloud Run service to deploy (or over-write) --spatialite Enable SpatialLite extension --show-files Output the generated Dockerfile and metad… | ["CLI reference"] | [] |
cli-reference:cli-help-publish-help | cli-reference | cli-help-publish-help | datasette publish | Shows a list of available deployment targets for publishing data with Datasette. Additional deployment targets can be added by plugins that use the publish_subcommand(publish) hook. [[[cog help(["publish", "--help"]) ]]] Usage: datasette publish [OPTIONS] COMMAND [ARGS]... Publish specified SQLite database files to the internet along with a Datasette-powered interface and API Options: --help Show this message and exit. Commands: cloudrun Publish databases to Datasette running on Cloud Run heroku Publish databases to Datasette running on Heroku [[[end]]] | ["CLI reference"] | [] |
cli-reference:cli-help-publish-heroku-help | cli-reference | cli-help-publish-heroku-help | datasette publish heroku | See Publishing to Heroku . [[[cog help(["publish", "heroku", "--help"]) ]]] Usage: datasette publish heroku [OPTIONS] [FILES]... Publish databases to Datasette running on Heroku Options: -m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish --extra-options TEXT Extra options to pass to datasette serve --branch TEXT Install datasette from a GitHub branch e.g. main --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --install TEXT Additional packages (e.g. plugins) to install --plugin-secret <TEXT TEXT TEXT>... Secrets to pass to plugins, e.g. --plugin- secret datasette-auth-github client_id xxx --version-note TEXT Additional note to show on /-/versions --secret TEXT Secret used for signing secure values, such as signed cookies --title TEXT Title for metadata --license TEXT License label for metadata --license_url TEXT License URL for metadata --source TEXT Source label for metadata --source_url TEXT Source URL for metadata --about TEXT About label for metadata --about_url TEXT About URL for metadata -n, --name TEXT Application name to use when deploying --tar TEXT --tar option to pass to Heroku, e.g. --tar=/usr/local/bin/gtar --generate-dir DIRECTORY Output generated application files and stop without deploying --h… | ["CLI reference"] | [] |
cli-reference:cli-help-serve-help | cli-reference | cli-help-serve-help | datasette serve | This command starts the Datasette web application running on your machine: datasette serve mydatabase.db Or since this is the default command you can run this instead: datasette mydatabase.db Once started you can access it at http://localhost:8001 [[[cog help(["serve", "--help"]) ]]] Usage: datasette serve [OPTIONS] [FILES]... Serve up specified SQLite database files with a web UI Options: -i, --immutable PATH Database files to open in immutable mode -h, --host TEXT Host for server. Defaults to 127.0.0.1 which means only connections from the local machine will be allowed. Use 0.0.0.0 to listen to all IPs and allow access from other machines. -p, --port INTEGER RANGE Port for server, defaults to 8001. Use -p 0 to automatically assign an available port. [0<=x<=65535] --uds TEXT Bind to a Unix domain socket --reload Automatically reload if code or metadata change detected - useful for development --cors Enable CORS by serving Access-Control-Allow- Origin: * --load-extension PATH:ENTRYPOINT? Path to a SQLite extension to load, and optional entrypoint --inspect-file TEXT Path to JSON file created using "datasette inspect" -m, --metadata FILENAME Path to JSON/YAML file containing license/source metadata --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files fr… | ["CLI reference"] | [] |
cli-reference:cli-help-serve-help-settings | cli-reference | cli-help-serve-help-settings | datasette serve --help-settings | This command outputs all of the available Datasette settings . These can be passed to datasette serve using datasette serve --setting name value . [[[cog help(["--help-settings"]) ]]] Settings: default_page_size Default page size for the table view (default=100) max_returned_rows Maximum rows that can be returned from a table or custom query (default=1000) max_insert_rows Maximum rows that can be inserted at a time using the bulk insert API (default=100) num_sql_threads Number of threads in the thread pool for executing SQLite queries (default=3) sql_time_limit_ms Time limit for a SQL query in milliseconds (default=1000) default_facet_size Number of values to return for requested facets (default=30) facet_time_limit_ms Time limit for calculating a requested facet (default=200) facet_suggest_time_limit_ms Time limit for calculating a suggested facet (default=50) allow_facet Allow users to specify columns to facet using ?_facet= parameter (default=True) allow_download Allow users to download the original SQLite database files (default=True) allow_signed_tokens Allow users to create and use signed API tokens (default=True) default_allow_sql Allow anyone to run arbitrary SQL queries (default=True) max_signed_tokens_ttl Maximum allowed expiry time for signed API tokens (default=0) suggest_facets Calculate and display suggested facets (default… | ["CLI reference", "datasette serve"] | [] |
cli-reference:cli-help-uninstall-help | cli-reference | cli-help-uninstall-help | datasette uninstall | Uninstall one or more plugins. [[[cog help(["uninstall", "--help"]) ]]] Usage: datasette uninstall [OPTIONS] PACKAGES... Uninstall plugins and Python packages from the Datasette environment Options: -y, --yes Don't ask for confirmation --help Show this message and exit. [[[end]]] | ["CLI reference"] | [] |
publish:cli-package | publish | cli-package | datasette package | If you have docker installed (e.g. using Docker for Mac ) you can use the datasette package command to create a new Docker image in your local repository containing the datasette app bundled together with one or more SQLite databases: datasette package mydatabase.db Here's example output for the package command: datasette package parlgov.db --extra-options="--setting sql_time_limit_ms 2500" Sending build context to Docker daemon 4.459MB Step 1/7 : FROM python:3.11.0-slim-bullseye ---> 79e1dc9af1c1 Step 2/7 : COPY . /app ---> Using cache ---> cd4ec67de656 Step 3/7 : WORKDIR /app ---> Using cache ---> 139699e91621 Step 4/7 : RUN pip install datasette ---> Using cache ---> 340efa82bfd7 Step 5/7 : RUN datasette inspect parlgov.db --inspect-file inspect-data.json ---> Using cache ---> 5fddbe990314 Step 6/7 : EXPOSE 8001 ---> Using cache ---> 8e83844b0fed Step 7/7 : CMD datasette serve parlgov.db --port 8001 --inspect-file inspect-data.json --setting sql_time_limit_ms 2500 ---> Using cache ---> 1bd380ea8af3 Successfully built 1bd380ea8af3 You can now run the resulting container like so: docker run -p 8081:8001 1bd380ea8af3 This exposes port 8001 inside the container as port 8081 on your host machine, so you can access the application at http://localhost:8081/ You can customize the port that is exposed by the container using the --port option: datasette package mydatabase.db --port 8080 A full list of options can be seen by running datasette package --help , or in the datasette package section of the CLI reference. | ["Publishing data"] | [{"href": "https://www.docker.com/docker-mac", "label": "Docker for Mac"}] |
publish:cli-publish | publish | cli-publish | datasette publish | Once you have created a SQLite database (e.g. using csvs-to-sqlite ) you can deploy it to a hosting account using a single command. You will need a hosting account with Heroku or Google Cloud . Once you have created your account you will need to install and configure the heroku or gcloud command-line tools. | ["Publishing data"] | [{"href": "https://github.com/simonw/csvs-to-sqlite/", "label": "csvs-to-sqlite"}, {"href": "https://www.heroku.com/", "label": "Heroku"}, {"href": "https://cloud.google.com/", "label": "Google Cloud"}] |
changelog:code-formatting-with-black-and-prettier | changelog | code-formatting-with-black-and-prettier | Code formatting with Black and Prettier | Datasette adopted Black for opinionated Python code formatting in June 2019. Datasette now also embraces Prettier for JavaScript formatting, which like Black is enforced by tests in continuous integration. Instructions for using these two tools can be found in the new section on Code formatting in the contributors documentation. ( #1167 ) | ["Changelog", "0.54 (2021-01-25)"] | [{"href": "https://github.com/psf/black", "label": "Black"}, {"href": "https://prettier.io/", "label": "Prettier"}, {"href": "https://github.com/simonw/datasette/issues/1167", "label": "#1167"}] |
json_api:column-filter-arguments | json_api | column-filter-arguments | Column filter arguments | You can filter the data returned by the table based on column values using a query string argument. ?column__exact=value or ?_column=value Returns rows where the specified column exactly matches the value. ?column__not=value Returns rows where the column does not match the value. ?column__contains=value Rows where the string column contains the specified value ( column like "%value%" in SQL). ?column__notcontains=value Rows where the string column does not contain the specified value ( column not like "%value%" in SQL). ?column__endswith=value Rows where the string column ends with the specified value ( column like "%value" in SQL). ?column__startswith=value Rows where the string column starts with the specified value ( column like "value%" in SQL). ?column__gt=value Rows which are greater than the specified value. ?column__gte=value Rows which… | ["JSON API", "Table arguments"] | [] |
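These filter arguments combine naturally with the JSON API. A minimal sketch, assuming a local instance with a hypothetical mydatabase/mytable table and hypothetical name and size columns, that fetches filtered rows as JSON using only the standard library:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Placeholder instance, database and table names - adjust for your own data.
base_url = "http://127.0.0.1:8001/mydatabase/mytable.json"

params = urlencode({
    "name__contains": "park",   # column like "%park%"
    "size__gte": "100",         # column >= 100
    "_shape": "array",          # return a plain JSON array of row objects
})

with urlopen(f"{base_url}?{params}") as response:
    rows = json.load(response)

print(len(rows), "matching rows")
```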
settings:config-dir | settings | config-dir | Configuration directory mode | Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose: datasette one.db two.db \ --metadata=metadata.json \ --template-dir=templates/ \ --plugins-dir=plugins \ --static css:css As an alternative to this, you can run Datasette in configuration directory mode. Create a directory with the following structure: # In a directory called my-app: my-app/one.db my-app/two.db my-app/datasette.yaml my-app/metadata.json my-app/templates/index.html my-app/plugins/my_plugin.py my-app/static/my.css Now start Datasette by providing the path to that directory: datasette my-app/ Datasette will detect the files in that directory and automatically configure itself using them. It will serve all *.db files that it finds, will load metadata.json if it exists, and will load the templates , plugins and static folders if they are present. The files that can be included in this directory are as follows. All are optional. *.db (or *.sqlite3 or *.sqlite ) - SQLite database files that will be served by Datasette datasette.yaml - Configuration for the Datasette instance metadata.json - Metadata for those databases - metadata.yaml or metadata.yml can be used as well inspect-data.json - the result of running datasette inspect *.db --inspect-file=inspect-data.json from the configuration directory - any database files listed here will be treated as immutable, so they should not be changed while Datasette is running templates/ - a directory containing Custom templates … | ["Settings"] | [] |
changelog:configuration | changelog | configuration | Configuration | Plugin configuration now lives in the datasette.yaml configuration file , passed to Datasette using the -c/--config option. Thanks, Alex Garcia. ( #2093 ) datasette -c datasette.yaml Where datasette.yaml contains configuration that looks like this: plugins: datasette-cluster-map: latitude_column: xlat longitude_column: xlon Previously plugins were configured in metadata.yaml , which was confusing as plugin settings were unrelated to database and table metadata. The -s/--setting option can now be used to set plugin configuration as well. See Configuration via the command-line for details. ( #2252 ) The above YAML configuration example using -s/--setting looks like this: datasette mydatabase.db \ -s plugins.datasette-cluster-map.latitude_column xlat \ -s plugins.datasette-cluster-map.longitude_column xlon The new /-/config page shows the current instance configuration, after redacting keys that could contain sensitive data such as API keys or passwords. ( #2254 ) Existing Datasette installations may already have configuration set in metadata.yaml that should be migrated to datasette.yaml . To avoid breaking these installations, Datasette will silently treat table configuration, plugin configuration and allow blocks in metadata as if they had been specified in configuration instead. ( #2247 ) ( #2248 ) ( #2249 ) Note that the datasette publish command has not yet been updated to accept a datasette.yaml configuration file. This will be addressed in #2195 but for the moment you can include those settings in metadata.yaml instead. | ["Changelog", "1.0a8 (2024-02-07)"] | [{"href": "https://github.com/simonw/datasette/issues/2093", "label": "#2093"}, {"href": "https://github.com/simonw/datasette/issues/2252", "label": "#2252"}, {"href": "https://github.com/simonw/datasette/issues/2254", "label": "#2254"}, {"href": "https://github.com/simonw/datasette/issues/2247", "label": "#2247"}, {"href": "https://github.com/simonw/datasette/issues/2248", "label": "#2248"}, {"href": "https://github.com/simonw/datasette/issues/2249", "label": "#2249"}, {"href": "https://github.com/simonw/datasette/issues/2195", "label": "#2195"}] |
configuration:configuration-cli | configuration | configuration-cli | Configuration via the command-line | The recommended way to configure Datasette is using a datasette.yaml file passed to -c/--config . You can also pass individual settings to Datasette using the -s/--setting option, which can be used multiple times: datasette mydatabase.db \ --setting settings.default_page_size 50 \ --setting settings.sql_time_limit_ms 3500 This option takes dotted-notation for the first argument and a value for the second argument. This means you can use it to set any configuration value that would be valid in a datasette.yaml file. It also works for plugin configuration, for example for datasette-cluster-map : datasette mydatabase.db \ --setting plugins.datasette-cluster-map.latitude_column xlat \ --setting plugins.datasette-cluster-map.longitude_column xlon If the value you provide is a valid JSON object or list it will be treated as nested data, allowing you to configure plugins that accept lists such as datasette-proxy-url : datasette mydatabase.db \ -s plugins.datasette-proxy-url.paths '[{"path": "/proxy", "backend": "http://example.com/"}]' This is equivalent to a datasette.yaml file containing the following: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ plugins: datasette-proxy-url: paths: - path: /proxy backend: http://example.com/ """).strip() ) ]]] [[[end]]] | ["Configuration"] | [{"href": "https://datasette.io/plugins/datasette-cluster-map", "label": "datasette-cluster-map"}, {"href": "https://datasette.io/plugins/datasette-proxy-url", "label": "datasette-proxy-url"}] |
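To make the dotted-notation mapping concrete, here is an illustrative Python sketch - not Datasette's actual implementation - showing how -s/--setting key paths and JSON values could be folded into the nested structure that datasette.yaml uses:

```python
import json

def apply_setting(config, dotted_key, value):
    """Fold a -s/--setting style dotted key and value into a nested dict."""
    # Values that parse as JSON objects or lists are treated as nested data,
    # matching the behaviour described above; everything else stays a string.
    try:
        parsed = json.loads(value)
        if isinstance(parsed, (dict, list)):
            value = parsed
    except ValueError:
        pass
    keys = dotted_key.split(".")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return config

config = {}
apply_setting(config, "settings.default_page_size", "50")
apply_setting(
    config,
    "plugins.datasette-proxy-url.paths",
    '[{"path": "/proxy", "backend": "http://example.com/"}]',
)
print(json.dumps(config, indent=2))
```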
configuration:configuration-reference | configuration | configuration-reference | The following example shows some of the valid configuration options that can exist inside datasette.yaml . [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # Datasette settings block settings: default_page_size: 50 sql_time_limit_ms: 3500 max_returned_rows: 2000 # top-level plugin configuration plugins: datasette-my-plugin: key: valueA # Database and table-level configuration databases: your_db_name: # plugin configuration for the your_db_name database plugins: datasette-my-plugin: key: valueA tables: your_table_name: allow: # Only the root user can access this table id: root # plugin configuration for the your_table_name table # inside your_db_name database plugins: datasette-my-plugin: key: valueB """) ) ]]] [[[end]]] | ["Configuration"] | [] | |
configuration:configuration-reference-canned-queries | configuration | configuration-reference-canned-queries | Canned queries configuration | Canned queries are named SQL queries that appear in the Datasette interface. They can be configured in datasette.yaml using the queries key at the database level: [[[cog from metadata_doc import config_example config_example(cog, { "databases": { "sf-trees": { "queries": { "just_species": { "sql": "select qSpecies from Street_Tree_List" } } } } }) ]]] [[[end]]] See the canned queries documentation for more, including how to configure writable canned queries . | ["Configuration", null] | [] |
configuration:configuration-reference-css-js | configuration | configuration-reference-css-js | Custom CSS and JavaScript | Datasette can load additional CSS and JavaScript files, configured in datasette.yaml like this: [[[cog from metadata_doc import config_example config_example(cog, """ extra_css_urls: - https://simonwillison.net/static/css/all.bf8cd891642c.css extra_js_urls: - https://code.jquery.com/jquery-3.2.1.slim.min.js """) ]]] [[[end]]] The extra CSS and JavaScript files will be linked in the <head> of every page: <link rel="stylesheet" href="https://simonwillison.net/static/css/all.bf8cd891642c.css"> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js"></script> You can also specify a SRI (subresource integrity hash) for these assets: [[[cog config_example(cog, """ extra_css_urls: - url: https://simonwillison.net/static/css/all.bf8cd891642c.css sri: sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI extra_js_urls: - url: https://code.jquery.com/jquery-3.2.1.slim.min.js sri: sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g= """) ]]] [[[end]]] This will produce: <link rel="stylesheet" href="https://simonwillison.net/static/css/all.bf8cd891642c.css" integrity="sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI" crossorigin="anonymous"> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=" crossorigin="anonymous"></script> Modern browsers will only execute the stylesheet or JavaScript if the SRI hash matches the content served. You can generate hashes using www.srihash.org Items in "extra_js_urls" can specify "module": true if they reference JavaScript that uses JavaScript modules . This configuration: [[[cog config_example(cog, """ extra_js_urls: - url: http… | ["Configuration", null] | [{"href": "https://www.srihash.org/", "label": "www.srihash.org"}, {"href": "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules", "label": "JavaScript modules"}] |
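If you prefer to compute SRI hashes locally rather than using www.srihash.org, here is a short Python sketch using only the standard library; the jQuery URL is just the example asset from the configuration above:

```python
import base64
import hashlib
from urllib.request import urlopen

# Fetch the asset and hash its exact bytes.
url = "https://code.jquery.com/jquery-3.2.1.slim.min.js"
body = urlopen(url).read()

# An SRI value is the algorithm name plus the base64-encoded raw digest.
digest = hashlib.sha384(body).digest()
print("sha384-" + base64.b64encode(digest).decode())
```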
configuration:configuration-reference-permissions | configuration | configuration-reference-permissions | Permissions configuration | Datasette's authentication and permissions system can also be configured using datasette.yaml . Here is a simple example: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # Instance is only available to users 'sharon' and 'percy': allow: id: - sharon - percy # Only 'percy' is allowed access to the accounting database: databases: accounting: allow: id: percy """).strip() ) ]]] [[[end]]] Access permissions in datasette.yaml has the full details. | ["Configuration", null] | [] |
configuration:configuration-reference-plugins | configuration | configuration-reference-plugins | Plugin configuration | Datasette plugins often require configuration. This plugin configuration should be placed in plugins keys inside datasette.yaml . Most plugins are configured at the top-level of the file, using the plugins key: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # inside datasette.yaml plugins: datasette-my-plugin: key: my_value """).strip() ) ]]] [[[end]]] Some plugins can be configured at the database or table level. These should use a plugins key nested under the appropriate place within the databases object: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # inside datasette.yaml databases: my_database: # plugin configuration for the my_database database plugins: datasette-my-plugin: key: my_value my_other_database: tables: my_table: # plugin configuration for the my_table table inside the my_other_database database plugins: datasette-my-plugin: key: my_value """).strip() ) ]]] [[[end]]] | ["Configuration", null] | [] |
configuration:configuration-reference-settings | configuration | configuration-reference-settings | Settings | Settings can be configured in datasette.yaml with the settings key: [[[cog from metadata_doc import config_example import textwrap config_example(cog, textwrap.dedent( """ # inside datasette.yaml settings: default_allow_sql: off default_page_size: 50 """).strip() ) ]]] [[[end]]] The full list of settings is available in the settings documentation . Settings can also be passed to Datasette using one or more --setting name value command line options. | ["Configuration", null] | [] |
full_text_search:configuring-fts-by-hand | full_text_search | configuring-fts-by-hand | Configuring FTS by hand | We recommend using sqlite-utils , but if you want to hand-roll a SQLite full-text search table you can do so using the following SQL. To enable full-text search for a table called items that works against the name and description columns, you would run this SQL to create a new items_fts FTS virtual table: CREATE VIRTUAL TABLE "items_fts" USING FTS4 ( name, description, content="items" ); This creates a set of tables to power full-text search against items . The new items_fts table will be detected by Datasette as the fts_table for the items table. Creating the table is not enough: you also need to populate it with a copy of the data that you wish to make searchable. You can do that using the following SQL: INSERT INTO "items_fts" (rowid, name, description) SELECT rowid, name, description FROM items; If your table has columns that are foreign key references to other tables you can include that data in your full-text search index using a join. Imagine the items table has a foreign key column called category_id which refers to a categories table - you could create a full-text search table like this: CREATE VIRTUAL TABLE "items_fts" USING FTS4 ( name, description, category_name, content="items" ); And then populate it like this: INSERT INTO "items_fts" (rowid, name, description, category_name) SELECT items.rowid, items.name, items.description, categories.name FROM items JOIN categories ON items.category_id=categories.id; You can use this technique to populate the full-text search index from any combination of tables and joins that makes sense for your project. | ["Full-text search", "Enabling full-text search for a SQLite table"] | [{"href": "https://sqlite-utils.datasette.io/", "label": "sqlite-utils"}] |
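The same SQL can be run from Python's built-in sqlite3 module. A minimal sketch, assuming an items.db file containing the items table described above and using "park" as a placeholder search term:

```python
import sqlite3

conn = sqlite3.connect("items.db")  # assumed filename containing the items table
conn.executescript(
    """
    CREATE VIRTUAL TABLE "items_fts" USING FTS4 (
        name,
        description,
        content="items"
    );
    INSERT INTO "items_fts" (rowid, name, description)
        SELECT rowid, name, description FROM items;
    """
)
conn.commit()

# Sanity check: run a full-text query against the new index.
for (rowid,) in conn.execute(
    'SELECT rowid FROM "items_fts" WHERE "items_fts" MATCH ?', ("park",)
):
    print(rowid)
conn.close()
```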
full_text_search:configuring-fts-using-csvs-to-sqlite | full_text_search | configuring-fts-using-csvs-to-sqlite | Configuring FTS using csvs-to-sqlite | If your data starts out in CSV files, you can use Datasette's companion tool csvs-to-sqlite to convert that file into a SQLite database and enable full-text search on specific columns. For a file called items.csv where you want full-text search to operate against the name and description columns you would run the following: csvs-to-sqlite items.csv items.db -f name -f description | ["Full-text search", "Enabling full-text search for a SQLite table"] | [{"href": "https://github.com/simonw/csvs-to-sqlite", "label": "csvs-to-sqlite"}] |
full_text_search:configuring-fts-using-sqlite-utils | full_text_search | configuring-fts-using-sqlite-utils | Configuring FTS using sqlite-utils | sqlite-utils is a CLI utility and Python library for manipulating SQLite databases. You can use it from Python code to configure FTS search, or you can achieve the same goal using the accompanying command-line tool . Here's how to use sqlite-utils to enable full-text search for an items table across the name and description columns: sqlite-utils enable-fts mydatabase.db items name description | ["Full-text search", "Enabling full-text search for a SQLite table"] | [{"href": "https://sqlite-utils.datasette.io/", "label": "sqlite-utils"}, {"href": "https://sqlite-utils.datasette.io/en/latest/python-api.html#enabling-full-text-search", "label": "it from Python code"}, {"href": "https://sqlite-utils.datasette.io/en/latest/cli.html#configuring-full-text-search", "label": "using the accompanying command-line tool"}] |
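The Python-library equivalent of that command, using the enable_fts() method from the sqlite-utils Python API linked above, looks roughly like this:

```python
import sqlite_utils

db = sqlite_utils.Database("mydatabase.db")
# Create and populate an FTS index covering the name and description columns.
db["items"].enable_fts(["name", "description"])
```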
index:contents | index | contents | Contents | Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions Configuration Configuration via the command-line datasette.yaml reference Settings Plugin configuration Permissions configuration Canned queries configuration Custom CSS and JavaScript The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve Environment variables datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect datasette create-token Pages and API endpoints Top-level index Database Hidden tables Queries Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Default representation Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Enabling CORS The JSON write API Inserting rows Upserting rows Updating a row Deleting a row Creating a table Creating a table from example data Dropping tables Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the "root" actor Permissions How permissions… | ["Datasette"] | [] |
contributing:contributing-alpha-beta | contributing | contributing-alpha-beta | Alpha and beta releases | Alpha and beta releases are published to preview upcoming features that may not yet be stable - in particular to preview new plugin hooks. You are welcome to try these out, but please be aware that details may change before the final release. Please join discussions on the issue tracker to share your thoughts and experiences with alpha and beta features that you try out. | ["Contributing"] | [{"href": "https://github.com/simonw/datasette/issues", "label": "discussions on the issue tracker"}] |
contributing:contributing-bug-fix-branch | contributing | contributing-bug-fix-branch | Releasing bug fixes from a branch | If it's necessary to publish a bug fix release without shipping new features that have landed on main a release branch can be used. Create it from the relevant last tagged release like so: git branch 0.52.x 0.52.4 git checkout 0.52.x Next cherry-pick the commits containing the bug fixes: git cherry-pick COMMIT Write the release notes in the branch, and update the version number in version.py . Then push the branch: git push -u origin 0.52.x Once the tests have completed, publish the release from that branch target using the GitHub Draft a new release form. Finally, cherry-pick the commit with the release notes and version number bump across to main : git checkout main git cherry-pick COMMIT git push | ["Contributing"] | [{"href": "https://github.com/simonw/datasette/releases/new", "label": "Draft a new release"}] |
contributing:contributing-continuous-deployment | contributing | contributing-continuous-deployment | Continuously deployed demo instances | The demo instance at latest.datasette.io is re-deployed automatically to Google Cloud Run for every push to main that passes the test suite. This is implemented by the GitHub Actions workflow at .github/workflows/deploy-latest.yml . Specific branches can also be set to automatically deploy by adding them to the on: push: branches block at the top of the workflow YAML file. Branches configured in this way will be deployed to a new Cloud Run service whether or not their tests pass. The Cloud Run URL for a branch demo can be found in the GitHub Actions logs. | ["Contributing"] | [{"href": "https://latest.datasette.io/", "label": "latest.datasette.io"}, {"href": "https://github.com/simonw/datasette/blob/main/.github/workflows/deploy-latest.yml", "label": ".github/workflows/deploy-latest.yml"}] |
contributing:contributing-debugging | contributing | contributing-debugging | Debugging | Any errors that occur while Datasette is running will display a stack trace on the console. You can tell Datasette to open an interactive pdb (or ipdb , if present) debugger session if an error occurs using the --pdb option: datasette --pdb fixtures.db For ipdb , first run this: datasette install ipdb | ["Contributing"] | [{"href": "https://pypi.org/project/ipdb/", "label": "ipdb"}] |
contributing:contributing-documentation | contributing | contributing-documentation | Editing and building the documentation | Datasette's documentation lives in the docs/ directory and is deployed automatically using Read The Docs . The documentation is written using reStructuredText. You may find this article on The subset of reStructuredText worth committing to memory useful. You can build it locally by installing sphinx and sphinx_rtd_theme in your Datasette development environment and then running make html directly in the docs/ directory: # You may first need to activate your virtual environment: source venv/bin/activate # Install the dependencies needed to build the docs pip install -e .[docs] # Now build the docs cd docs/ make html This will create the HTML version of the documentation in docs/_build/html . You can open it in your browser like so: open _build/html/index.html Any time you make changes to a .rst file you can re-run make html to update the built documents, then refresh them in your browser. For added productivity, you can use sphinx-autobuild to run Sphinx in auto-build mode. This will run a local webserver serving the docs that automatically rebuilds them and refreshes the page any time you hit save in your editor. sphinx-autobuild will have been installed when you ran pip install -e .[docs] . In your docs/ directory you can start the server by running the following: make livehtml Now browse to http://localhost:8000/ to view the documentation. Any edits you make should be instantly reflected in your browser. | ["Contributing"] | [{"href": "https://readthedocs.org/", "label": "Read The Docs"}, {"href": "https://simonwillison.net/2018/Aug/25/restructuredtext/", "label": "The subset of reStructuredText worth committing to memory"}, {"href": "https://pypi.org/project/sphinx-autobuild/", "label": "sphinx-autobuild"}] |
contributing:contributing-documentation-cog | contributing | contributing-documentation-cog | Running Cog | Some pages of documentation (in particular the CLI reference ) are automatically updated using Cog . To update these pages, run the following command: cog -r docs/*.rst | ["Contributing", "Editing and building the documentation"] | [{"href": "https://github.com/nedbat/cog", "label": "Cog"}] |
contributing:contributing-formatting | contributing | contributing-formatting | Code formatting | Datasette uses opinionated code formatters: Black for Python and Prettier for JavaScript. These formatters are enforced by Datasette's continuous integration: if a commit includes Python or JavaScript code that does not match the style enforced by those tools, the tests will fail. When developing locally, you can verify and correct the formatting of your code using these tools. | ["Contributing"] | [{"href": "https://github.com/psf/black", "label": "Black"}, {"href": "https://prettier.io/", "label": "Prettier"}] |
contributing:contributing-formatting-black | contributing | contributing-formatting-black | Running Black | Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: black . --check All done! ✨ 🍰 ✨ 95 files would be left unchanged. If any of your code does not conform to Black you can run this to automatically fix those problems: black . reformatted ../datasette/setup.py All done! ✨ 🍰 ✨ 1 file reformatted, 94 files left unchanged. | ["Contributing", "Code formatting"] | [] |
contributing:contributing-formatting-blacken-docs | contributing | contributing-formatting-blacken-docs | blacken-docs | The blacken-docs command applies Black formatting rules to code examples in the documentation. Run it like this: blacken-docs -l 60 docs/*.rst | ["Contributing", "Code formatting"] | [{"href": "https://pypi.org/project/blacken-docs/", "label": "blacken-docs"}] |
contributing:contributing-formatting-prettier | contributing | contributing-formatting-prettier | Prettier | To install Prettier, install Node.js and then run the following in the root of your datasette repository checkout: npm install This will install Prettier in a node_modules directory. You can then check that your code matches the coding style like so: npm run prettier -- --check > prettier > prettier 'datasette/static/*[!.min].js' "--check" Checking formatting... [warn] datasette/static/plugins.js [warn] Code style issues found in the above file(s). Forgot to run Prettier? You can fix any problems by running: npm run fix | ["Contributing", "Code formatting"] | [{"href": "https://nodejs.org/en/download/package-manager/", "label": "install Node.js"}] |
contributing:contributing-release | contributing | contributing-release | Release process | Datasette releases are performed using tags. When a new release is published on GitHub, a GitHub Action workflow will perform the following: Run the unit tests against all supported Python versions. If the tests pass... Build a Docker image of the release and push a tag to https://hub.docker.com/r/datasetteproject/datasette Re-point the "latest" tag on Docker Hub to the new image Build a wheel bundle of the underlying Python source code Push that new wheel up to PyPI: https://pypi.org/project/datasette/ If the release is an alpha, navigate to https://readthedocs.org/projects/datasette/versions/ and search for the tag name in the "Activate a version" filter, then mark that version as "active" to ensure it will appear on the public ReadTheDocs documentation site. To deploy new releases you will need to have push access to the main Datasette GitHub repository. Datasette follows Semantic Versioning : major.minor.patch We increment major for backwards-incompatible releases. Datasette is currently pre-1.0 so the major version is always 0 . We increment minor for new features. We increment patch for bugfix releases. Alpha and beta releases may have an additional a0 or b0 suffix - the integer component will be incremented with each subsequent alpha or beta. To release a new version, first create a commit that updates the version number in datasette/version.py and the changelog with highlights of the new version. An example commit can be seen here : # Update changelog git commit -m " Release 0.51a1 Ref… | ["Contributing"] | [{"href": "https://github.com/simonw/datasette/blob/main/.github/workflows/deploy-latest.yml", "label": "GitHub Action workflow"}, {"href": "https://hub.docker.com/r/datasetteproject/datasette", "label": "https://hub.docker.com/r/datasetteproject/datasette"}, {"href": "https://pypi.org/project/datasette/", "label": "https://pypi.org/project/datasette/"}, {"href": "https://readthedocs.org/projects/datasette/versions/", "label": "https://readthedocs.org/projects/datasette/versions/"}, {"href": "https://semver.org/", "label": "Semantic Versioning"}, {"href": "https://github.com/simonw/datasette/commit/0e1e89c6ba3d0fbdb0823272952cf356f3016def", "label": "commit can be seen here"}, {"href": "https://github.com/simonw/datasette/issues/581#ref-commit-d56f402", "label": "here"}, {"href": "https://observablehq.com/@simonw/extract-issue-numbers-from-pasted-text", "label": "Extract issue numbers from pasted text"}, {"href": "https://github.com/simonw/datasette/releases/new", "label": "a new release"}, {"href": "https://euangoddard.github.io/clipboard2markdown/", "label": "Paste to Markdown tool"}, {"href": "https://github.com/simonw/datasette/compare/0.64.6...0.64.7", "label": "https://github.com/simonw/datasette/compare/0.64.6...0.64.7"}, {"href": "https://datasette.io/", "label": "datasette.io"}, {"href": "https://github.com/simonw/datasette.io/blob/main/news.yaml", "label": "news.yaml"}] |
contributing:contributing-running-tests | contributing | contributing-running-tests | Running the tests | Once you have done this, you can run the Datasette unit tests from inside your datasette/ directory using pytest like so: pytest You can run the tests faster using multiple CPU cores with pytest-xdist like this: pytest -n auto -m "not serial" -n auto detects the number of available cores automatically. The -m "not serial" skips tests that don't work well in a parallel test environment. You can run those tests separately like so: pytest -m "serial" | ["Contributing"] | [{"href": "https://docs.pytest.org/", "label": "pytest"}, {"href": "https://pypi.org/project/pytest-xdist/", "label": "pytest-xdist"}] |
contributing:contributing-upgrading-codemirror | contributing | contributing-upgrading-codemirror | Upgrading CodeMirror | Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . Here are the steps for upgrading to a new version of CodeMirror: Install the packages with: npm i codemirror @codemirror/lang-sql Build the bundle using the version number from package.json with: node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js \ -f iife \ -n cm \ -o datasette/static/cm-editor-6.0.1.bundle.js \ -p @rollup/plugin-node-resolve \ -p @rollup/plugin-terser Update the version reference in the codemirror.html template. | ["Contributing"] | [{"href": "https://codemirror.net/", "label": "CodeMirror"}, {"href": "https://latest.datasette.io/fixtures", "label": "this page"}] |
contributing:contributing-using-fixtures | contributing | contributing-using-fixtures | Using fixtures | To run Datasette itself, type datasette . You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. You can create a copy of that database by running this command: python tests/fixtures.py fixtures.db Now you can run Datasette against the new fixtures database like so: datasette fixtures.db This will start a server at http://127.0.0.1:8001/ . Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript). If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: datasette --reload fixtures.db You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that: python tests/fixtures.py fixtures.db fixtures-metadata.json Or to output the plugins used by the tests, run this: python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins Test tables written to fixtures.db - metadata written to fixtures-metadata.json Wrote plugin: fixtures-plugins/register_output_renderer.py Wrote plugin: fixtures-plugins/view_name.py Wrote plugin: fixtures-plugins/my_plugin.py Wrote plugin: fixtures-plugins/messages_output_renderer.py Wrote plugin: fixtures-plugins/my_plugin_2.py Then run Datasette like this: datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/ | ["Contributing"] | [] |
changelog:control-http-caching-with-ttl | changelog | control-http-caching-with-ttl | Control HTTP caching with ?_ttl= | You can now customize the HTTP max-age header that is sent on a per-URL basis, using the new ?_ttl= query string parameter. You can set this to any value in seconds, or you can set it to 0 to disable HTTP caching entirely. Consider for example this query which returns a randomly selected member of the Avengers: select * from [avengers/avengers] order by random() limit 1 If you hit the following page repeatedly you will get the same result, due to HTTP caching: /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1 By adding ?_ttl=0 to the URL you can ensure the page will not be cached and get back a different super hero every time: /fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0 | ["Changelog", "0.23 (2018-06-18)"] | [{"href": "https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1", "label": "/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1"}, {"href": "https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0", "label": "/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random%28%29+limit+1&_ttl=0"}] |
changelog:cookie-methods | changelog | cookie-methods | Cookie methods | Plugins can now use the new response.set_cookie() method to set cookies. A new request.cookies method on the Request object can be used to read incoming cookies. | ["Changelog", "0.44 (2020-06-11)"] | [] |
authentication:createtokenview | authentication | createtokenview | API Tokens | Datasette includes a default mechanism for generating API tokens that can be used to authenticate requests. Authenticated users can create new API tokens using a form on the /-/create-token page. Tokens created in this way can be further restricted to only allow access to specific actions, or to limit those actions to specific databases, tables or queries. Created tokens can then be passed in the Authorization: Bearer $token header of HTTP requests to Datasette. A token created by a user will include that user's "id" in the token payload, so any permissions granted to that user based on their ID can be made available to the token as well. When one of these tokens accompanies a request, the actor for that request will have the following shape: { "id": "user_id", "token": "dstok", "token_expires": 1667717426 } The "id" field duplicates the ID of the actor who first created the token. The "token" field identifies that this actor was authenticated using a Datasette signed token ( dstok ). The "token_expires" field, if present, indicates that the token will expire after that integer timestamp. The /-/create-token page cannot be accessed by actors that are authenticated with a "token": "some-value" property. This is to prevent API tokens from being used to create more tokens. Datasette plugins that implement their own form of API token authentication should follow this convention. You can disable the signed token feature entirely using the allow_signed_tokens setting. | ["Authentication and permissions"] | [] |
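A minimal sketch of using one of these tokens from Python, assuming a local instance on port 8001 and a placeholder token value:

```python
import json
from urllib.request import Request, urlopen

token = "dstok_..."  # placeholder - use a real token from /-/create-token

request = Request(
    "http://127.0.0.1:8001/-/actor.json",
    headers={"Authorization": f"Bearer {token}"},
)
with urlopen(request) as response:
    print(json.load(response))  # should show the actor derived from the token
```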
changelog:csrf-protection | changelog | csrf-protection | CSRF protection | Since writable canned queries are built using POST forms, Datasette now ships with CSRF protection ( #798 ). This applies automatically to any POST request, which means plugins need to include a csrftoken in any POST forms that they render. They can do that like so: <input type="hidden" name="csrftoken" value="{{ csrftoken() }}"> | ["Changelog", "0.44 (2020-06-11)"] | [{"href": "https://github.com/simonw/datasette/issues/798", "label": "#798"}] |
custom_templates:css-classes-on-the-body | custom_templates | css-classes-on-the-body | CSS classes on the <body> | Every default template includes CSS classes in the body designed to support custom styling. The index template (the top level page at / ) gets this: <body class="index"> The database template ( /dbname ) gets this: <body class="db db-dbname"> The custom SQL template ( /dbname?sql=... ) gets this: <body class="query db-dbname"> A canned query template ( /dbname/queryname ) gets this: <body class="query db-dbname query-queryname"> The table template ( /dbname/tablename ) gets: <body class="table db-dbname table-tablename"> The row template ( /dbname/tablename/rowid ) gets: <body class="row db-dbname table-tablename"> The db-x and table-x classes use the database or table names themselves if they are valid CSS identifiers. If they aren't, we strip any invalid characters out and append a 6 character md5 digest of the original name, in order to ensure that multiple tables which resolve to the same stripped character version still have different CSS classes. Some examples: "simple" => "simple" "MixedCase" => "MixedCase" "-no-leading-hyphens" => "no-leading-hyphens-65bea6" "_no-leading-underscores" => "no-leading-underscores-b921bc" "no spaces" => "no-spaces-7088d7" "-" => "336d5e" "no $ characters" => "no--characters-59e024" <td> and <th> elements also get custom CSS classes reflecting the database column they are representing, for example: <table> <thead> <tr> <th class="col-id" scope="col">id</th> <th class="col-name" scope="col">name</th> </tr> </thead> <tbody> <tr> <td class="col-id"><a href="...">1</a></td> <td class="col-name">SMITH</td> </tr> </tbody> </table> | ["Custom pages and templates"] | [] |
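An illustrative sketch of the naming scheme described above - not Datasette's exact implementation - that strips invalid characters and appends the first six hex characters of an md5 digest whenever the name had to be changed:

```python
import hashlib
import re

def css_class_for(name):
    """Sketch of the scheme above: strip invalid characters and, if the name
    had to be changed, append the first six hex characters of its md5 digest."""
    cleaned = re.sub(r"\s+", "-", name)                 # whitespace becomes hyphens
    cleaned = re.sub(r"[^a-zA-Z0-9_-]", "", cleaned)    # drop other invalid characters
    cleaned = cleaned.lstrip("-_")                      # no leading hyphens/underscores
    if cleaned == name:
        return cleaned
    suffix = hashlib.md5(name.encode("utf-8")).hexdigest()[:6]
    return f"{cleaned}-{suffix}" if cleaned else suffix

for example in ["simple", "MixedCase", "-no-leading-hyphens", "no spaces", "-"]:
    print(example, "=>", css_class_for(example))
```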
changelog:csv-export | changelog | csv-export | CSV export | Any Datasette table, view or custom SQL query can now be exported as CSV. Check out the CSV export documentation for more details, or try the feature out on https://fivethirtyeight.datasettes.com/fivethirtyeight/bechdel%2Fmovies If your table has more than max_returned_rows (default 1,000) Datasette provides the option to stream all rows . This option takes advantage of async Python and Datasette's efficient pagination to iterate through the entire matching result set and stream it back as a downloadable CSV file. | ["Changelog", "0.23 (2018-06-18)"] | [{"href": "https://fivethirtyeight.datasettes.com/fivethirtyeight/bechdel%2Fmovies", "label": "https://fivethirtyeight.datasettes.com/fivethirtyeight/bechdel%2Fmovies"}] |
csv_export:csv-export-url-parameters | csv_export | csv-export-url-parameters | URL parameters | The following options can be used to customize the CSVs returned by Datasette. ?_header=off This removes the first row of the CSV file specifying the headings - only the row data will be returned. ?_stream=on Stream all matching records, not just the first page of results. See below. ?_dl=on Causes Datasette to return a content-disposition: attachment; filename="filename.csv" header. | ["CSV export"] | [] |
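Combining these parameters, here is a minimal sketch that streams an entire table to a local CSV file using only the Python standard library; the instance, database and table names are placeholders:

```python
from urllib.request import urlopen

# ?_stream=on streams every matching row; ?_header=off drops the heading row.
url = "http://127.0.0.1:8001/mydatabase/mytable.csv?_stream=on&_header=off"

with urlopen(url) as response, open("mytable.csv", "wb") as out:
    for chunk in iter(lambda: response.read(8192), b""):
        out.write(chunk)
```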
custom_templates:custom-pages-404 | custom_templates | custom-pages-404 | Returning 404s | To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function: {% if not rows %} {{ raise_404("Content not found") }} {% endif %} If you call raise_404() the other content in your template will be ignored. | ["Custom pages and templates"] | [] |
custom_templates:custom-pages-errors | custom_templates | custom-pages-errors | Custom error pages | Datasette returns an error page if an unexpected error occurs, access is forbidden or content cannot be found. You can customize the response returned for these errors by providing a custom error page template. Content not found errors use a 404.html template. Access denied errors use 403.html . Invalid input errors use 400.html . Unexpected errors of other kinds use 500.html . If a template for the specific error code is not found a template called error.html will be used instead. If you do not provide that template Datasette's default error.html template will be used. The error template will be passed the following context: status - integer The integer HTTP status code, e.g. 404, 500, 403, 400. error - string Details of the specific error, usually a full sentence. title - string or None A title for the page representing the class of error. This is often None for errors that do not provide a title separate from their error message. | ["Custom pages and templates", "Custom redirects"] | [{"href": "https://github.com/simonw/datasette/blob/main/datasette/templates/error.html", "label": "default error.html template"}] |
custom_templates:custom-pages-headers | custom_templates | custom-pages-headers | Custom headers and status codes | Custom pages default to being served with a content-type of text/html; charset=utf-8 and a 200 status code. You can change these by calling a custom function from within your template. For example, to serve a custom page with a 418 I'm a teapot HTTP status code, create a file in pages/teapot.html containing the following: {{ custom_status(418) }} <html> <head><title>Teapot</title></head> <body> I'm a teapot </body> </html> To serve a custom HTTP header, add a custom_header(name, value) function call. For example: {{ custom_status(418) }} {{ custom_header("x-teapot", "I am") }} <html> <head><title>Teapot</title></head> <body> I'm a teapot </body> </html> You can verify this is working using curl like this: curl -I 'http://127.0.0.1:8001/teapot' HTTP/1.1 418 date: Sun, 26 Apr 2020 18:38:30 GMT server: uvicorn x-teapot: I am content-type: text/html; charset=utf-8 | ["Custom pages and templates"] | [] |
custom_templates:custom-pages-parameters | custom_templates | custom-pages-parameters | Path parameters for pages | You can define custom pages that match multiple paths by creating files with {variable} definitions in their filenames. For example, to capture any request to a URL matching /about/* , you would create a template in the following location: templates/pages/about/{slug}.html A hit to /about/news would render that template and pass in a variable called slug with a value of "news" . If you use this mechanism don't forget to return a 404 if the referenced content could not be found. You can do this using {{ raise_404() }} described below. Templates defined using custom page routes work particularly well with the sql() template function from datasette-template-sql or the graphql() template function from datasette-graphql . | ["Custom pages and templates"] | [{"href": "https://github.com/simonw/datasette-template-sql", "label": "datasette-template-sql"}, {"href": "https://github.com/simonw/datasette-graphql#the-graphql-template-function", "label": "datasette-graphql"}] |
custom_templates:custom-pages-redirects | custom_templates | custom-pages-redirects | Custom redirects | You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html : {{ custom_redirect("https://github.com/simonw/datasette") }} Now requests to http://localhost:8001/datasette will result in a redirect. These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function: {{ custom_redirect("https://github.com/simonw/datasette", 301) }} | ["Custom pages and templates"] | [] |
custom_templates:customization | custom_templates | customization | Custom pages and templates | Datasette provides a number of ways of customizing the way data is displayed. | [] | [] |
custom_templates:customization-css | custom_templates | customization-css | Writing custom CSS | Custom templates need to take Datasette's default CSS into account. The pattern portfolio at /-/patterns ( example here ) is a useful reference for understanding the available CSS classes. The core class is particularly useful - you can apply this directly to a <input> or <button> element to get Datasette's default form styles, or you can apply it to a containing element (such as <form> ) to apply those styles to all of the form elements within it. | ["Custom pages and templates"] | [{"href": "https://latest.datasette.io/-/patterns", "label": "example here"}] |
custom_templates:customization-custom-templates | custom_templates | customization-custom-templates | Custom templates | By default, Datasette uses default templates that ship with the package. You can over-ride these templates by specifying a custom --template-dir like this: datasette mydb.db --template-dir=mytemplates/ Datasette will now first look for templates in that directory, and fall back on the defaults if no matches are found. It is also possible to over-ride templates on a per-database, per-row or per- table basis. The lookup rules Datasette uses are as follows: Index page (/): index.html Database page (/mydatabase): database-mydatabase.html database.html Custom query page (/mydatabase?sql=...): query-mydatabase.html query.html Canned query page (/mydatabase/canned-query): query-mydatabase-canned-query.html query-mydatabase.html query.html Table page (/mydatabase/mytable): table-mydatabase-mytable.html table.html Row page (/mydatabase/mytable/id): row-mydatabase-mytable.html row.html Table of rows and columns include on table page: _table-table-mydatabase-mytable.html _table-mydatabase-mytable.html _table.html Table of rows and columns include on row page: _table-row-mydatabase-mytable.html _table-mydatabase-mytable.html _table.html If a table name has spaces or other unexpected characters in it, the template filename will follow the same rules as our custom <body> CSS classes - for example, a table called "Food Trucks" will attempt to load the following templates: table-mydatabase-Food-Trucks-399138.html table.html You can find out which templates were considered for a specific page by viewing source on that page and looking for an HTML comment at the bottom. The comment will look something like this: <!-… | ["Custom pages and templates", "Publishing static assets"] | [{"href": "https://github.com/simonw/datasette/blob/main/datasette/templates/_table.html", "label": "can be seen here"}] |
custom_templates:customization-static-files | custom_templates | customization-static-files | Serving static files | Datasette can serve static files for you, using the --static option. Consider the following directory structure: metadata.json static-files/styles.css static-files/app.js You can start Datasette using --static assets:static-files/ to serve those files from the /assets/ mount point: datasette --config datasette.yaml --static assets:static-files/ --memory The following URLs will now serve the content from those CSS and JS files: http://localhost:8001/assets/styles.css http://localhost:8001/assets/app.js You can reference those files from datasette.yaml like this, see custom CSS and JavaScript for more details: [[[cog from metadata_doc import config_example config_example(cog, """ extra_css_urls: - /assets/styles.css extra_js_urls: - /assets/app.js """) ]]] [[[end]]] | ["Custom pages and templates"] | [] |
internals:database-close | internals | database-close | db.close() | Closes all of the open connections to file-backed databases. This is mainly intended to be used by large test suites, to avoid hitting limits on the number of open files. | ["Internals for plugins", "Database class"] | [] |
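One way db.close() might be used, sketched here as a pytest fixture; the fixture data and the way the Datasette instance is constructed are illustrative assumptions:

    import sqlite3

    import pytest
    from datasette.app import Datasette

    @pytest.fixture
    def ds(tmp_path):
        db_path = str(tmp_path / "fixtures.db")
        # Create a small file-backed database for the test to use.
        conn = sqlite3.connect(db_path)
        conn.execute("create table t (id integer primary key)")
        conn.commit()
        conn.close()
        instance = Datasette([db_path])
        yield instance
        # Release the file handles held by file-backed databases so a large
        # test suite does not run out of open files.
        for db in instance.databases.values():
            db.close()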
internals:database-constructor | internals | database-constructor | Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None) | The Database() constructor can be used by plugins, in conjunction with .add_database(db, name=None, route=None) , to create and register new databases. The arguments are as follows: ds - Datasette class (required) The Datasette instance you are attaching this database to. path - string Path to a SQLite database file on disk. is_mutable - boolean Set this to False to cause Datasette to open the file in immutable mode. is_memory - boolean Use this to create non-shared memory connections. memory_name - string or None Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist any changes made to them for the lifetime of the Datasette server process. The first argument is the datasette instance you are attaching to, the second is a path= , then is_mutable and is_memory are both optional arguments. | ["Internals for plugins", "Database class"] | [] |
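As a sketch of how a plugin might call this constructor, here is a hypothetical plugin that uses the startup() plugin hook to register a shared named in-memory database; the name "scratch" is purely illustrative:

    from datasette import hookimpl
    from datasette.database import Database

    @hookimpl
    def startup(datasette):
        # A named in-memory database is shared across threads and keeps its
        # contents for the lifetime of the Datasette server process.
        db = Database(datasette, memory_name="scratch")
        datasette.add_database(db, name="scratch")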
internals:database-execute | internals | database-execute | await db.execute(sql, ...) | Executes a SQL query against the database and returns the resulting rows (see Results ). sql - string (required) The SQL query to execute. This can include ? or :named parameters. params - list or dict A list or dictionary of values to use for the parameters. List for ? , dictionary for :named . truncate - boolean Should the rows returned by the query be truncated at the maximum page size? Defaults to True , set this to False to disable truncation. custom_time_limit - integer ms A custom time limit for this query. This can be set to a lower value than the Datasette configured default. If a query takes longer than this it will be terminated early and raise a datasette.database.QueryInterrupted exception. page_size - integer Set a custom page size for truncation, over-riding the configured Datasette default. log_sql_errors - boolean Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to True . | ["Internals for plugins", "Database class"] | [] |
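A short illustrative sketch of calling this method with a named parameter; the articles table and status column are hypothetical:

    async def published_count(db):
        # :status is bound from the dictionary passed as the second argument.
        results = await db.execute(
            "select count(*) as n from articles where status = :status",
            {"status": "published"},
        )
        # The returned Results object supports .first(), which gives one row or None.
        return results.first()["n"]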
internals:database-execute-fn | internals | database-execute-fn | await db.execute_fn(fn) | Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the await . Example usage: def get_version(conn): return conn.execute( "select sqlite_version()" ).fetchall()[0][0] version = await db.execute_fn(get_version) | ["Internals for plugins", "Database class"] | [] |