id page ref title content breadcrumbs references
changelog:id50 changelog id50 0.29.1 (2019-07-11) Fixed bug with static mounts using relative paths which could lead to traversal exploits ( #555 ) - thanks Abdussamet Kocak! Datasette can now be run as a module: python -m datasette ( #556 ) - thanks, Abdussamet Kocak! ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/555", "label": "#555"}, {"href": "https://github.com/simonw/datasette/issues/556", "label": "#556"}]
changelog:id51 changelog id51 0.29 (2019-07-07) ASGI, new plugin hooks, facet by date and much, much more... ["Changelog"] []
changelog:id52 changelog id52 0.28 (2019-05-19) A salmagundi of new features! ["Changelog"] [{"href": "https://adamj.eu/tech/2019/01/18/a-salmagundi-of-django-alpha-announcements/", "label": "salmagundi"}]
changelog:id53 changelog id53 Small changes We now show the size of the database file next to the download link ( #172 ) New /-/databases introspection page shows currently connected databases ( #470 ) Binary data is no longer displayed on the table and row pages ( #442 - thanks, Russ Garrett) New show/hide SQL links on custom query pages ( #415 ) The extra_body_script plugin hook now accepts an optional view_name argument ( #443 - thanks, Russ Garrett) Bumped Jinja2 dependency to 2.10.1 ( #426 ) All table filters are now documented, and documentation is enforced via unit tests ( 2c19a27 ) New project guideline: master should stay shippable at all times! ( 31f36e1 ) Fixed a bug where sqlite_timelimit() occasionally failed to clean up after itself ( bac4e01 ) We no longer load additional plugins when executing pytest ( #438 ) Homepage now links to database views if there are less than five tables in a database ( #373 ) The --cors option is now respected by error pages ( #453 ) datasette publish heroku now uses the --include-vcs-ignore option, which means it works under Travis CI ( #407 ) datasette publish heroku now publishes using Python 3.6.8 ( 66… ["Changelog", "0.28 (2019-05-19)"] [{"href": "https://github.com/simonw/datasette/issues/172", "label": "#172"}, {"href": "https://github.com/simonw/datasette/issues/470", "label": "#470"}, {"href": "https://github.com/simonw/datasette/pull/442", "label": "#442"}, {"href": "https://github.com/simonw/datasette/issues/415", "label": "#415"}, {"href": "https://github.com/simonw/datasette/pull/443", "label": "#443"}, {"href": "https://github.com/simonw/datasette/pull/426", "label": "#426"}, {"href": "https://github.com/simonw/datasette/commit/2c19a27d15a913e5f3dd443f04067169a6f24634", "label": "2c19a27"}, {"href": "https://github.com/simonw/datasette/commit/31f36e1b97ccc3f4387c80698d018a69798b6228", "label": "31f36e1"}, {"href": "https://github.com/simonw/datasette/commit/bac4e01f40ae7bd19d1eab1fb9349452c18de8f5", "label": "bac4e01"}, {"href": "https://github.com/simonw/datasette/issues/438", "label": "#438"}, {"href": "https://github.com/simonw/datasette/issues/373", "label": "#373"}, {"href": "https://github.com/simonw/datasette/issues/453", "label": "#453"}, {"href": "https://github.com/simonw/datasette/pull/407", "label": "#407"}, {"href": "https://github.com/simonw/datasette/commit/666c37415a898949fae0437099d62a35b1e9c430", "label": "666c374"}, {"href": "https://github.com/simonw/datasette/issues/472", "label": "#472"}, {"href": "https://github.com/simonw/datasette/commit/09ef305c687399384fe38487c075e8669682deb4", "label": "09ef305"}, {"href": "https://github.com/simonw/datasette/issues/476", "label": "#476"}]
changelog:id54 changelog id54 0.27.1 (2019-05-09) Tiny bugfix release: don't install tests/ in the wrong place. Thanks, Veit Heller. ["Changelog"] []
changelog:id55 changelog id55 0.27 (2019-01-31) New command: datasette plugins ( documentation ) shows you the currently installed list of plugins. Datasette can now output newline-delimited JSON using the new ?_shape=array&_nl=on query string option. Added documentation on The Datasette Ecosystem . Now using Python 3.7.2 as the base for the official Datasette Docker image . ["Changelog"] [{"href": "http://ndjson.org/", "label": "newline-delimited JSON"}, {"href": "https://hub.docker.com/r/datasetteproject/datasette/", "label": "Datasette Docker image"}]
changelog:id56 changelog id56 0.26.1 (2019-01-10) /-/versions now includes SQLite compile_options ( #396 ) datasetteproject/datasette Docker image now uses SQLite 3.26.0 ( #397 ) Cleaned up some deprecation warnings under Python 3.7 ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/396", "label": "#396"}, {"href": "https://hub.docker.com/r/datasetteproject/datasette", "label": "datasetteproject/datasette"}, {"href": "https://github.com/simonw/datasette/issues/397", "label": "#397"}]
changelog:id57 changelog id57 0.26 (2019-01-02) datasette serve --reload now restarts Datasette if a database file changes on disk. datasette publish now now takes an optional --alias mysite.now.sh argument. This will attempt to set an alias after the deploy completes. Fixed a bug where the advanced CSV export form failed to include the currently selected filters ( #393 ) ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/393", "label": "#393"}]
changelog:id58 changelog id58 0.25.2 (2018-12-16) datasette publish heroku now uses the python-3.6.7 runtime Added documentation on how to build the documentation Added documentation covering our release process Upgraded to pytest 4.0.2 ["Changelog"] []
changelog:id59 changelog id59 0.25.1 (2018-11-04) Documentation improvements plus a fix for publishing to Zeit Now. datasette publish now now uses Zeit's v1 platform, to work around the new 100MB image limit. Thanks, @slygent - closes #366 . ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/366", "label": "#366"}]
changelog:id6 changelog id6 0.52.5 (2020-12-09) Fix for error caused by combining the _searchmode=raw and ?_search_COLUMN parameters. ( #1134 ) ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/1134", "label": "#1134"}]
changelog:id60 changelog id60 0.25 (2018-09-19) New plugin hooks, improved database view support and an easier way to use more recent versions of SQLite. New publish_subcommand plugin hook. A plugin can now add additional datasette publish publishers in addition to the default now and heroku , both of which have been refactored into default plugins. publish_subcommand documentation . Closes #349 New render_cell plugin hook. Plugins can now customize how values are displayed in the HTML tables produced by Datasette's browseable interface. datasette-json-html and datasette-render-images are two new plugins that use this hook. render_cell documentation . Closes #352 New extra_body_script plugin hook, enabling plugins to provide additional JavaScript that should be added to the page footer. extra_body_script documentation . extra_css_urls and extra_js_urls hooks now take additional optional parameters, allowing them to be more selective about which pages they apply to. Documentation . You can now use the sortable_columns metadata setting to explicitly enable sort-by-column in the interface for database views, as well as for specific tables. The new fts_table and fts_pk metadata settings can now be used to explicitly configure full-text search for a table or a view , even if that table is not directly coupled to the SQLite FTS feature in the database schema itself. Datasette will now use pysqlite3 in place of the standard library sqlite3 module if it has been installed in the current environment. This makes it much easier to run Datasette against a more recent version of SQLite, including the just-released SQLite 3.25.0 which adds win… ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/349", "label": "#349"}, {"href": "https://github.com/simonw/datasette-json-html", "label": "datasette-json-html"}, {"href": "https://github.com/simonw/datasette-render-images", "label": "datasette-render-images"}, {"href": "https://github.com/simonw/datasette/issues/352", "label": "#352"}, {"href": "https://github.com/coleifer/pysqlite3", "label": "pysqlite3"}, {"href": "https://www.sqlite.org/releaselog/3_25_0.html", "label": "SQLite 3.25.0"}, {"href": "https://github.com/simonw/datasette/issues/360", "label": "#360"}]
changelog:id61 changelog id61 0.24 (2018-07-23) A number of small new features: datasette publish heroku now supports --extra-options , fixes #334 Custom error message if SpatiaLite is needed for specified database, closes #331 New config option: truncate_cells_html for truncating long cell values in HTML view - closes #330 Documentation for datasette publish and datasette package , closes #337 Fixed compatibility with Python 3.7 datasette publish heroku now supports app names via the -n option, which can also be used to overwrite an existing application [Russ Garrett] Title and description metadata can now be set for canned SQL queries , closes #342 New force_https_on config option, fixes https:// API URLs when deploying to Zeit Now - closes #333 ?_json_infinity=1 query string argument for handling Infinity/-Infinity values in JSON, closes #332 URLs displayed in the results of custom SQL queries are now URLified, closes #298 ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/334", "label": "#334"}, {"href": "https://github.com/simonw/datasette/issues/331", "label": "#331"}, {"href": "https://github.com/simonw/datasette/issues/330", "label": "#330"}, {"href": "https://github.com/simonw/datasette/issues/337", "label": "#337"}, {"href": "https://github.com/simonw/datasette/issues/342", "label": "#342"}, {"href": "https://github.com/simonw/datasette/issues/333", "label": "#333"}, {"href": "https://github.com/simonw/datasette/issues/332", "label": "#332"}, {"href": "https://github.com/simonw/datasette/issues/298", "label": "#298"}]
changelog:id7 changelog id7 0.52.4 (2020-12-05) Show pysqlite3 version on /-/versions , if installed. ( #1125 ) Errors output by Datasette (e.g. for invalid SQL queries) now go to stderr , not stdout . ( #1131 ) Fix for a startup error on Windows caused by unnecessary from os import EX_CANTCREAT - thanks, Abdussamet Koçak. ( #1094 ) ["Changelog"] [{"href": "https://github.com/coleifer/pysqlite3", "label": "pysqlite3"}, {"href": "https://github.com/simonw/datasette/issues/1125", "label": "#1125"}, {"href": "https://github.com/simonw/datasette/issues/1131", "label": "#1131"}, {"href": "https://github.com/simonw/datasette/issues/1094", "label": "#1094"}]
changelog:id70 changelog id70 0.23.2 (2018-07-07) Minor bugfix and documentation release. CSV export now respects --cors , fixes #326 Installation instructions , including docker image - closes #328 Fix for row pages for tables with / in, closes #325 ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/326", "label": "#326"}, {"href": "https://github.com/simonw/datasette/issues/328", "label": "#328"}, {"href": "https://github.com/simonw/datasette/issues/325", "label": "#325"}]
changelog:id74 changelog id74 0.23.1 (2018-06-21) Minor bugfix release. Correctly display empty strings in HTML table, closes #314 Allow "." in database filenames, closes #302 404s ending in slash redirect to remove that slash, closes #309 Fixed incorrect display of compound primary keys with foreign key references. Closes #319 Docs + example of canned SQL query using || concatenation. Closes #321 Correctly display facets with value of 0 - closes #318 Default 'expand labels' to checked in CSV advanced export ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/314", "label": "#314"}, {"href": "https://github.com/simonw/datasette/issues/302", "label": "#302"}, {"href": "https://github.com/simonw/datasette/issues/309", "label": "#309"}, {"href": "https://github.com/simonw/datasette/issues/319", "label": "#319"}, {"href": "https://github.com/simonw/datasette/issues/321", "label": "#321"}, {"href": "https://github.com/simonw/datasette/issues/318", "label": "#318"}]
changelog:id8 changelog id8 0.52.3 (2020-12-03) Fixed bug where static assets would 404 for Datasette installed on ARM Amazon Linux. ( #1124 ) ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/1124", "label": "#1124"}]
changelog:id81 changelog id81 0.23 (2018-06-18) This release features CSV export, improved options for foreign key expansions, new configuration settings and improved support for SpatiaLite. See datasette/compare/0.22.1...0.23 for a full list of commits added since the last release. ["Changelog"] [{"href": "https://github.com/simonw/datasette/compare/0.22.1...0.23", "label": "datasette/compare/0.22.1...0.23"}]
changelog:id82 changelog id82 0.22.1 (2018-05-23) Bugfix release, plus we now use versioneer for our version numbers. Faceting no longer breaks pagination, fixes #282 Add __version_info__ derived from __version__ [Robert Gieseke] This might be a tuple of more than two values (major and minor version) if commits have been made after a release. Add version number support with Versioneer. [Robert Gieseke] Versioneer Licence: Public Domain (CC0-1.0) Closes #273 Refactor inspect logic [Russ Garrett] ["Changelog"] [{"href": "https://github.com/warner/python-versioneer", "label": "versioneer"}, {"href": "https://github.com/simonw/datasette/issues/282", "label": "#282"}, {"href": "https://github.com/simonw/datasette/issues/273", "label": "#273"}]
changelog:id85 changelog id85 0.22 (2018-05-20) The big new feature in this release is Facets . Datasette can now apply faceted browse to any column in any table. It will also suggest possible facets. See the Datasette Facets announcement post for more details. In addition to the work on facets: Added docs for introspection endpoints New --config option, added --help-config , closes #274 Removed the --page_size= argument to datasette serve in favour of: datasette serve --config default_page_size:50 mydb.db Added new help section: $ datasette --help-config Config options: default_page_size Default page size for the table view (default=100) max_returned_rows Maximum rows that can be returned from a table or custom query (default=1000) sql_time_limit_ms Time limit for a SQL query in milliseconds (default=1000) default_facet_size Number of values to return for requested facets (default=30) facet_time_limit_ms Time limit for calculating a requested facet (default=200) facet_suggest_time_limit_ms Time limit for calculating a suggested facet (default=50) Only apply responsive table styles to .rows-and-column Otherwise they interfere with tables in the description, e.g. on https://fivethirtyeight.datasettes.com/fivethirtyeight/nba-elo%2Fnbaallelo Refactored views into new views/ modules, refs #256 Documentation for SQLite full-text search support, closes #253 … ["Changelog"] [{"href": "https://simonwillison.net/2018/May/20/datasette-facets/", "label": "Datasette Facets"}, {"href": "https://docs.datasette.io/en/stable/introspection.html", "label": "docs for introspection endpoints"}, {"href": "https://github.com/simonw/datasette/issues/274", "label": "#274"}, {"href": "https://fivethirtyeight.datasettes.com/fivethirtyeight/nba-elo%2Fnbaallelo", "label": "https://fivethirtyeight.datasettes.com/fivethirtyeight/nba-elo%2Fnbaallelo"}, {"href": "https://github.com/simonw/datasette/issues/256", "label": "#256"}, {"href": "https://docs.datasette.io/en/stable/full_text_search.html", "label": "Documentation for SQLite full-text search"}, {"href": "https://github.com/simonw/datasette/issues/253", "label": "#253"}, {"href": "https://github.com/simonw/datasette/issues/252", "label": "#252"}]
changelog:id9 changelog id9 0.52.2 (2020-12-02) Generated columns from SQLite 3.31.0 or higher are now correctly displayed. ( #1116 ) Error message if you attempt to open a SpatiaLite database now suggests using --load-extension=spatialite if it detects that the extension is available in a common location. ( #1115 ) OPTIONS requests against the /database page no longer raise a 500 error. ( #1100 ) Databases larger than 32MB that are published to Cloud Run can now be downloaded. ( #749 ) Fix for misaligned cog icon on table and database pages. Thanks, Abdussamet Koçak. ( #1121 ) ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/1116", "label": "#1116"}, {"href": "https://github.com/simonw/datasette/issues/1115", "label": "#1115"}, {"href": "https://github.com/simonw/datasette/issues/1100", "label": "#1100"}, {"href": "https://github.com/simonw/datasette/issues/749", "label": "#749"}, {"href": "https://github.com/simonw/datasette/issues/1121", "label": "#1121"}]
changelog:id90 changelog id90 0.21 (2018-05-05) New JSON _shape= options, the ability to set table _size= and a mechanism for searching within specific columns. Default tests to using a longer timelimit Every now and then a test will fail in Travis CI on Python 3.5 because it hit the default 20ms SQL time limit. Test fixtures now default to a 200ms time limit, and we only use the 20ms time limit for the specific test that tests query interruption. This should make our tests on Python 3.5 in Travis much more stable. Support _search_COLUMN=text searches, closes #237 Show version on /-/plugins page, closes #248 ?_size=max option, closes #249 Added /-/versions and /-/versions.json , closes #244 Sample output: { "python": { "version": "3.6.3", "full": "3.6.3 (default, Oct 4 2017, 06:09:38) \n[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]" }, "datasette": { "version": "0.20" }, "sqlite": { "version": "3.23.1", "extensions": { "json1": null, "spatialite": "4.3.0a" } } } Renamed ?_sql_time_limit_ms= to ?_timelimit , closes #242 New ?_shape=array option + tweaks to _shape , closes #245 Default is now ?_shape=arrays (renamed from lists ) New ?_shape=array returns an array of objects as the root object … ["Changelog"] [{"href": "https://github.com/simonw/datasette/issues/237", "label": "#237"}, {"href": "https://github.com/simonw/datasette/issues/248", "label": "#248"}, {"href": "https://github.com/simonw/datasette/issues/249", "label": "#249"}, {"href": "https://github.com/simonw/datasette/issues/244", "label": "#244"}, {"href": "https://github.com/simonw/datasette/issues/242", "label": "#242"}, {"href": "https://github.com/simonw/datasette/issues/245", "label": "#245"}, {"href": "https://github.com/simonw/datasette/issues/240", "label": "#240"}, {"href": "https://github.com/simonw/datasette/issues/229", "label": "#229"}, {"href": "https://github.com/simonw/datasette/issues/230", "label": "#230"}, {"href": "https://github.com/simonw/datasette/issues/239", "label": "#239"}, {"href": "https://github.com/simonw/datasette/issues/228", "label": "#228"}, {"href": "https://github.com/simonw/datasette/issues/234", "label": "#234"}]
changelog:improved-support-for-spatialite changelog improved-support-for-spatialite Improved support for SpatiaLite The SpatiaLite module for SQLite adds robust geospatial features to the database. Getting SpatiaLite working can be tricky, especially if you want to use the most recent alpha version (with support for K-nearest neighbor). Datasette now includes extensive documentation on SpatiaLite , and thanks to Ravi Kotecha our GitHub repo includes a Dockerfile that can build the latest SpatiaLite and configure it for use with Datasette. The datasette publish and datasette package commands now accept a new --spatialite argument which causes them to install and configure SpatiaLite as part of the container they deploy. ["Changelog", "0.23 (2018-06-18)"] [{"href": "https://www.gaia-gis.it/fossil/libspatialite/index", "label": "SpatiaLite module"}, {"href": "https://github.com/r4vi", "label": "Ravi Kotecha"}, {"href": "https://github.com/simonw/datasette/blob/master/Dockerfile", "label": "Dockerfile"}]
changelog:javascript-modules changelog javascript-modules JavaScript modules JavaScript modules were introduced in ECMAScript 2015 and provide native browser support for the import and export keywords. To use modules, JavaScript needs to be included in <script> tags with a type="module" attribute. Datasette now has the ability to output <script type="module"> in places where you may wish to take advantage of modules. The extra_js_urls option described in Custom CSS and JavaScript can now be used with modules, and module support is also available for the extra_body_script() plugin hook. ( #1186 , #1187 ) datasette-leaflet-freedraw is the first example of a Datasette plugin that takes advantage of the new support for JavaScript modules. See Drawing shapes on a map to query a SpatiaLite database for more on this plugin. ["Changelog", "0.54 (2021-01-25)"] [{"href": "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules", "label": "JavaScript modules"}, {"href": "https://github.com/simonw/datasette/issues/1186", "label": "#1186"}, {"href": "https://github.com/simonw/datasette/issues/1187", "label": "#1187"}, {"href": "https://datasette.io/plugins/datasette-leaflet-freedraw", "label": "datasette-leaflet-freedraw"}, {"href": "https://simonwillison.net/2021/Jan/24/drawing-shapes-spatialite/", "label": "Drawing shapes on a map to query a SpatiaLite database"}]
changelog:latest-datasette-io changelog latest-datasette-io latest.datasette.io Every commit to Datasette master is now automatically deployed by Travis CI to https://latest.datasette.io/ - ensuring there is always a live demo of the latest version of the software. The demo uses the fixtures from our unit tests, ensuring it demonstrates the same range of functionality that is covered by the tests. You can see how the deployment mechanism works in our .travis.yml file. ["Changelog", "0.23 (2018-06-18)"] [{"href": "https://latest.datasette.io/", "label": "https://latest.datasette.io/"}, {"href": "https://github.com/simonw/datasette/blob/master/tests/fixtures.py", "label": "the fixtures"}, {"href": "https://github.com/simonw/datasette/blob/master/.travis.yml", "label": ".travis.yml"}]
changelog:log-out changelog log-out Log out The ds_actor cookie can be used by plugins (or by Datasette's --root mechanism ) to authenticate users. The new /-/logout page provides a way to clear that cookie. A "Log out" button now shows in the global navigation provided the user is authenticated using the ds_actor cookie. ( #840 ) ["Changelog", "0.45 (2020-07-01)"] [{"href": "https://github.com/simonw/datasette/issues/840", "label": "#840"}]
changelog:magic-parameters-for-canned-queries changelog magic-parameters-for-canned-queries Magic parameters for canned queries Canned queries now support Magic parameters , which can be used to insert or select automatically generated values. For example: insert into logs (user_id, timestamp) values (:_actor_id, :_now_datetime_utc) This inserts the currently authenticated actor ID and the current datetime. ( #842 ) ["Changelog", "0.45 (2020-07-01)"] [{"href": "https://github.com/simonw/datasette/issues/842", "label": "#842"}]
changelog:miscellaneous changelog miscellaneous Miscellaneous Got JSON data in one of your columns? Use the new ?_json=COLNAME argument to tell Datasette to return that JSON value directly rather than encoding it as a string. If you just want an array of the first value of each row, use the new ?_shape=arrayfirst option - example . ["Changelog", "0.23 (2018-06-18)"] [{"href": "https://latest.datasette.io/fixtures.json?sql=select+neighborhood+from+facetable+order+by+pk+limit+101&_shape=arrayfirst", "label": "example"}]
changelog:named-in-memory-database-support changelog named-in-memory-database-support Named in-memory database support As part of the work building the _internal database, Datasette now supports named in-memory databases that can be shared across multiple connections. This allows plugins to create in-memory databases which will persist data for the lifetime of the Datasette server process. ( #1151 ) The new memory_name= parameter to the Database class can be used to create named, shared in-memory databases. ["Changelog", "0.54 (2021-01-25)"] [{"href": "https://github.com/simonw/datasette/issues/1151", "label": "#1151"}]
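A minimal sketch of how a plugin might use this on startup, assuming the Database constructor and datasette.add_database() accept the arguments shown here; the database name "statistics" and the table are illustrative:

    from datasette import hookimpl
    from datasette.database import Database

    @hookimpl
    def startup(datasette):
        async def inner():
            # A named in-memory database persists for the lifetime of the
            # Datasette server process and is shared across connections
            db = Database(datasette, memory_name="statistics")
            datasette.add_database(db)
            await db.execute_write(
                "create table if not exists counts (key text primary key, n integer)",
                block=True,
            )
        return inner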
changelog:new-configuration-settings changelog new-configuration-settings New configuration settings Datasette's Settings now also supports boolean settings. A number of new configuration options have been added: num_sql_threads - the number of threads used to execute SQLite queries. Defaults to 3. allow_facet - enable or disable custom Facets using the _facet= parameter. Defaults to on. suggest_facets - should Datasette suggest facets? Defaults to on. allow_download - should users be allowed to download the entire SQLite database? Defaults to on. allow_sql - should users be allowed to execute custom SQL queries? Defaults to on. default_cache_ttl - Default HTTP caching max-age header in seconds. Defaults to 365 days - caching can be disabled entirely by setting this to 0. cache_size_kb - Set the amount of memory SQLite uses for its per-connection cache , in KB. allow_csv_stream - allow users to stream entire result sets as a single CSV file. Defaults to on. max_csv_mb - maximum size of a returned CSV file in MB. Defaults to 100MB, set to 0 to disable this limit. ["Changelog", "0.23 (2018-06-18)"] [{"href": "https://www.sqlite.org/pragma.html#pragma_cache_size", "label": "per-connection cache"}]
changelog:new-plugin-hook-asgi-wrapper changelog new-plugin-hook-asgi-wrapper New plugin hook: asgi_wrapper The asgi_wrapper(datasette) plugin hook allows plugins to entirely wrap the Datasette ASGI application in their own ASGI middleware. ( #520 ) Two new plugins take advantage of this hook: datasette-auth-github adds an authentication layer: users will have to sign in using their GitHub account before they can view data or interact with Datasette. You can also use it to restrict access to specific GitHub users, or to members of specified GitHub organizations or teams . datasette-cors allows you to configure CORS headers for your Datasette instance. You can use this to enable JavaScript running on a whitelisted set of domains to make fetch() calls to the JSON API provided by your Datasette instance. ["Changelog", "0.29 (2019-07-07)"] [{"href": "https://github.com/simonw/datasette/issues/520", "label": "#520"}, {"href": "https://github.com/simonw/datasette-auth-github", "label": "datasette-auth-github"}, {"href": "https://help.github.com/en/articles/about-organizations", "label": "organizations"}, {"href": "https://help.github.com/en/articles/organizing-members-into-teams", "label": "teams"}, {"href": "https://github.com/simonw/datasette-cors", "label": "datasette-cors"}, {"href": "https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS", "label": "CORS headers"}]
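A hedged sketch of what a plugin using this hook can look like, wrapping every response to add an extra HTTP header (the header name is illustrative):

    from datasette import hookimpl

    @hookimpl
    def asgi_wrapper(datasette):
        def wrap(app):
            async def middleware(scope, receive, send):
                async def wrapped_send(event):
                    if event["type"] == "http.response.start":
                        # Append a custom header to every HTTP response
                        headers = list(event.get("headers") or [])
                        headers.append((b"x-wrapped-by", b"my-plugin"))
                        event = {**event, "headers": headers}
                    await send(event)
                await app(scope, receive, wrapped_send)
            return middleware
        return wrap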
changelog:new-plugin-hook-extra-template-vars changelog new-plugin-hook-extra-template-vars New plugin hook: extra_template_vars The extra_template_vars(template, database, table, columns, view_name, request, datasette) plugin hook allows plugins to inject their own additional variables into the Datasette template context. This can be used in conjunction with custom templates to customize the Datasette interface. datasette-auth-github uses this hook to add custom HTML to the new top navigation bar (which is designed to be modified by plugins, see #540 ). ["Changelog", "0.29 (2019-07-07)"] [{"href": "https://github.com/simonw/datasette-auth-github", "label": "datasette-auth-github"}, {"href": "https://github.com/simonw/datasette/issues/540", "label": "#540"}]
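As a sketch, a plugin can implement the hook with just the arguments it needs (the hook accepts any subset of the named parameters); here it exposes the request's user-agent to all templates:

    from datasette import hookimpl

    @hookimpl
    def extra_template_vars(request):
        # Available in templates as {{ user_agent }}
        return {
            "user_agent": request.headers.get("user-agent") if request else None
        }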
changelog:new-plugin-hooks changelog new-plugin-hooks New plugin hooks register_magic_parameters(datasette) can be used to define new types of magic canned query parameters. startup(datasette) can run custom code when Datasette first starts up. datasette-init is a new plugin that uses this hook to create database tables and views on startup if they have not yet been created. ( #834 ) canned_queries(datasette, database, actor) lets plugins provide additional canned queries beyond those defined in Datasette's metadata. See datasette-saved-queries for an example of this hook in action. ( #852 ) forbidden(datasette, request, message) is a hook for customizing how Datasette responds to 403 forbidden errors. ( #812 ) ["Changelog", "0.45 (2020-07-01)"] [{"href": "https://github.com/simonw/datasette-init", "label": "datasette-init"}, {"href": "https://github.com/simonw/datasette/issues/834", "label": "#834"}, {"href": "https://github.com/simonw/datasette-saved-queries", "label": "datasette-saved-queries"}, {"href": "https://github.com/simonw/datasette/issues/852", "label": "#852"}, {"href": "https://github.com/simonw/datasette/issues/812", "label": "#812"}]
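For example, a minimal sketch of register_magic_parameters defining a :_header_*-style parameter that resolves values from request headers (the "header" prefix is illustrative):

    from datasette import hookimpl

    def header(key, request):
        # :_header_user_agent resolves to the user-agent request header
        return request.headers.get(key.replace("_", "-")) or ""

    @hookimpl
    def register_magic_parameters(datasette):
        return [
            ("header", header),
        ]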
changelog:new-visual-design changelog new-visual-design New visual design Datasette is no longer white and grey with blue and purple links! Natalie Downe has been working on a visual refresh, the first iteration of which is included in this release. ( #1056 ) ["Changelog", "0.51 (2020-10-31)"] [{"href": "https://twitter.com/natbat", "label": "Natalie Downe"}, {"href": "https://github.com/simonw/datasette/pull/1056", "label": "#1056"}]
changelog:other-changes changelog other-changes Other changes Datasette can now open multiple database files with the same name, e.g. if you run datasette path/to/one.db path/to/other/one.db . ( #509 ) datasette publish cloudrun now sets force_https_urls for every deployment, fixing some incorrect http:// links. ( #1178 ) Fixed a bug in the example nginx configuration in Running Datasette behind a proxy . ( #1091 ) The Datasette Ecosystem documentation page has been reduced in size in favour of the datasette.io tools and plugins directories. ( #1182 ) The request object now provides a request.full_path property, which returns the path including any query string. ( #1184 ) Better error message for disallowed PRAGMA clauses in SQL queries. ( #1185 ) datasette publish heroku now deploys using python-3.8.7 . New plugin testing documentation on Testing outbound HTTP calls with pytest-httpx . ( #1198 ) All ?_* query string parameters passed to the table page are now persisted in hidden form fields, so parameters such as ?_size=10 will be correctly passed to the next page when query filters are changed. ( #1194 ) Fixed a bug loading a database file called test-database (1).sqlite . ( #1181 ) ["Changelog", "0.54 (2021-01-25)"] [{"href": "https://github.com/simonw/datasette/issues/509", "label": "#509"}, {"href": "https://github.com/simonw/datasette/issues/1178", "label": "#1178"}, {"href": "https://github.com/simonw/datasette/issues/1091", "label": "#1091"}, {"href": "https://datasette.io/tools", "label": "tools"}, {"href": "https://datasette.io/plugins", "label": "plugins"}, {"href": "https://github.com/simonw/datasette/issues/1182", "label": "#1182"}, {"href": "https://github.com/simonw/datasette/issues/1184", "label": "#1184"}, {"href": "https://github.com/simonw/datasette/issues/1185", "label": "#1185"}, {"href": "https://github.com/simonw/datasette/issues/1198", "label": "#1198"}, {"href": "https://github.com/simonw/datasette/issues/1194", "label": "#1194"}, {"href": "https://github.com/simonw/datasette/issues/1181", "label": "#1181"}]
changelog:permissions changelog permissions Permissions Datasette also now has a built-in concept of Permissions . The permissions system answers the following question: Is this actor allowed to perform this action , optionally against this particular resource ? You can use the new "allow" block syntax in metadata.json (or metadata.yaml ) to set required permissions at the instance, database, table or canned query level. For example, to restrict access to the fixtures.db database to the "root" user: { "databases": { "fixtures": { "allow": { "id": "root" } } } } See Defining permissions with "allow" blocks for more details. Plugins can implement their own custom permission checks using the new permission_allowed(datasette, actor, action, resource) hook. A new debug page at /-/permissions shows recent permission checks, to help administrators and plugin authors understand exactly what checks are being performed. This tool defaults to only being available to the root user, but can be exposed to other users by plugins that respond to the permissions-debug permission. ( #788 ) ["Changelog", "0.44 (2020-06-11)"] [{"href": "https://github.com/simonw/datasette/issues/788", "label": "#788"}]
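A hedged sketch of such a custom check, granting the permissions-debug permission to actors carrying a hypothetical "staff" key; returning None (implicitly) defers the decision to other plugins and the default behaviour:

    from datasette import hookimpl

    @hookimpl
    def permission_allowed(actor, action):
        if action == "permissions-debug" and actor and actor.get("staff"):
            return True
        # No return value: no opinion, defer to other plugins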
changelog:plugins-can-now-add-links-within-datasette changelog plugins-can-now-add-links-within-datasette Plugins can now add links within Datasette A number of existing Datasette plugins add new pages to the Datasette interface, providing tools for things like uploading CSVs , editing table schemas or configuring full-text search . Plugins like this can now link to themselves from other parts of the Datasette interface. The menu_links(datasette, actor) hook ( #1064 ) lets plugins add links to Datasette's new top-right application menu, and the table_actions(datasette, actor, database, table) hook ( #1066 ) adds links to a new "table actions" menu on the table page. The demo at latest.datasette.io now includes some example plugins. To see the new table actions menu, first sign into that demo as root and then visit the facetable table to see the new cog icon menu at the top of the page. ["Changelog", "0.51 (2020-10-31)"] [{"href": "https://github.com/simonw/datasette-upload-csvs", "label": "uploading CSVs"}, {"href": "https://github.com/simonw/datasette-edit-schema", "label": "editing table schemas"}, {"href": "https://github.com/simonw/datasette-configure-fts", "label": "configuring full-text search"}, {"href": "https://github.com/simonw/datasette/issues/1064", "label": "#1064"}, {"href": "https://github.com/simonw/datasette/issues/1066", "label": "#1066"}, {"href": "https://latest.datasette.io/", "label": "latest.datasette.io"}, {"href": "https://latest.datasette.io/login-as-root", "label": "sign into that demo as root"}, {"href": "https://latest.datasette.io/fixtures/facetable", "label": "facetable"}]
changelog:register-routes-plugin-hooks changelog register-routes-plugin-hooks register_routes() plugin hooks Plugins can now register new views and routes via the register_routes() plugin hook ( #819 ). View functions can be defined that accept any of the current datasette object, the current request , or the ASGI scope , send and receive objects. ["Changelog", "0.44 (2020-06-11)"] [{"href": "https://github.com/simonw/datasette/issues/819", "label": "#819"}]
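A minimal sketch of the hook, assuming named groups captured by the route regex are available on the request and that view functions can return a Response (the /hello/ route is illustrative):

    from datasette import hookimpl
    from datasette.utils.asgi import Response

    def hello(request):
        name = request.url_vars.get("name", "world")
        return Response.text("Hello, {}".format(name))

    @hookimpl
    def register_routes():
        return [
            (r"^/hello/(?P<name>[^/]+)$", hello),
        ]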
changelog:running-datasette-behind-a-proxy changelog running-datasette-behind-a-proxy Running Datasette behind a proxy The base_url configuration option is designed to help run Datasette on a specific path behind a proxy - for example if you want to run an instance of Datasette at /my-datasette/ within your existing site's URL hierarchy, proxied behind nginx or Apache. Support for this configuration option has been greatly improved ( #1023 ), and guidelines for using it are now available in a new documentation section on Running Datasette behind a proxy . ( #1027 ) ["Changelog", "0.51 (2020-10-31)"] [{"href": "https://github.com/simonw/datasette/issues/1023", "label": "#1023"}, {"href": "https://github.com/simonw/datasette/issues/1027", "label": "#1027"}]
changelog:secret-plugin-configuration-options changelog secret-plugin-configuration-options Secret plugin configuration options Plugins like datasette-auth-github need a safe way to set secret configuration options. Since the default mechanism for configuring plugins exposes those settings in /-/metadata a new mechanism was needed. Secret configuration values describes how plugins can now specify that their settings should be read from a file or an environment variable: { "plugins": { "datasette-auth-github": { "client_secret": { "$env": "GITHUB_CLIENT_SECRET" } } } } These plugin secrets can be set directly using datasette publish . See Custom metadata and plugins for details. ( #538 and #543 ) ["Changelog", "0.29 (2019-07-07)"] [{"href": "https://github.com/simonw/datasette-auth-github", "label": "datasette-auth-github"}, {"href": "https://github.com/simonw/datasette/issues/538", "label": "#538"}, {"href": "https://github.com/simonw/datasette/issues/543", "label": "#543"}]
changelog:signed-values-and-secrets changelog signed-values-and-secrets Signed values and secrets Both flash messages and user authentication needed a way to sign values and set signed cookies. Two new methods are now available for plugins to take advantage of this mechanism: .sign(value, namespace="default") and .unsign(value, namespace="default") . Datasette will generate a secret automatically when it starts up, but to avoid resetting the secret (and hence invalidating any cookies) every time the server restarts you should set your own secret. You can pass a secret to Datasette using the new --secret option or with a DATASETTE_SECRET environment variable. See Configuring the secret for more details. You can also set a secret when you deploy Datasette using datasette publish or datasette package - see Using secrets with datasette publish . Plugins can now sign values and verify their signatures using the datasette.sign() and datasette.unsign() methods. ["Changelog", "0.44 (2020-06-11)"] []
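A sketch of the round trip, assuming view functions with access to the datasette object; the cookie name and namespace are illustrative:

    from datasette.utils.asgi import Response

    def set_cookie_view(datasette):
        signed = datasette.sign({"id": "user-1"}, namespace="remember")
        response = Response.text("Cookie set")
        response.set_cookie("remember_me", signed)
        return response

    def read_cookie(datasette, request):
        try:
            return datasette.unsign(
                request.cookies.get("remember_me", ""), namespace="remember"
            )
        except Exception:
            # A tampered or truncated value fails signature verification
            return None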
changelog:small-changes changelog small-changes Small changes Databases published using datasette publish now open in Immutable mode . ( #469 ) ?col__date= now works for columns containing spaces Automatic label detection (for deciding which column to show when linking to a foreign key) has been improved. ( #485 ) Fixed bug where pagination broke when combined with an expanded foreign key. ( #489 ) Contributors can now run pip install -e .[docs] to get all of the dependencies needed to build the documentation, including cd docs && make livehtml support. Datasette's dependencies are now all specified using the ~= match operator. ( #532 ) white-space: pre-wrap now used for table creation SQL. ( #505 ) Full list of commits between 0.28 and 0.29. ["Changelog", "0.29 (2019-07-07)"] [{"href": "https://github.com/simonw/datasette/issues/469", "label": "#469"}, {"href": "https://github.com/simonw/datasette/issues/485", "label": "#485"}, {"href": "https://github.com/simonw/datasette/issues/489", "label": "#489"}, {"href": "https://github.com/simonw/datasette/issues/532", "label": "#532"}, {"href": "https://github.com/simonw/datasette/issues/505", "label": "#505"}, {"href": "https://github.com/simonw/datasette/compare/0.28...0.29", "label": "Full list of commits"}]
changelog:smaller-changes changelog smaller-changes Smaller changes Wide tables shown within Datasette now scroll horizontally ( #998 ). This is achieved using a new <div class="table-wrapper"> element which may impact the implementation of some plugins (for example this change to datasette-cluster-map ). New debug-menu permission. ( #1068 ) Removed --debug option, which didn't do anything. ( #814 ) Link: HTTP header pagination. ( #1014 ) x button for clearing filters. ( #1016 ) Edit SQL button on canned queries. ( #1019 ) --load-extension=spatialite shortcut. ( #1028 ) scale-in animation for column action menu. ( #1039 ) Option to pass a list of templates to .render_template() is now documented. ( #1045 ) New datasette.urls.static_plugins() method. ( #1033 ) datasette -o option now opens the most relevant page. ( #976 ) datasette --cors option now enables access to /database.db downloads. ( #1057 ) Database file downloads now implement cascading permissions, so you can download a database if you have view-database-download permission even if you do not have permission to access the Datasette instance. ( #1058 ) New documentation on Designing URLs for your plugin . (… ["Changelog", "0.51 (2020-10-31)"] [{"href": "https://github.com/simonw/datasette/issues/998", "label": "#998"}, {"href": "https://github.com/simonw/datasette-cluster-map/commit/fcb4abbe7df9071c5ab57defd39147de7145b34e", "label": "this change to datasette-cluster-map"}, {"href": "https://github.com/simonw/datasette/issues/1068", "label": "#1068"}, {"href": "https://github.com/simonw/datasette/issues/814", "label": "#814"}, {"href": "https://github.com/simonw/datasette/issues/1014", "label": "#1014"}, {"href": "https://github.com/simonw/datasette/issues/1016", "label": "#1016"}, {"href": "https://github.com/simonw/datasette/issues/1019", "label": "#1019"}, {"href": "https://github.com/simonw/datasette/issues/1028", "label": "#1028"}, {"href": "https://github.com/simonw/datasette/issues/1039", "label": "#1039"}, {"href": "https://github.com/simonw/datasette/issues/1045", "label": "#1045"}, {"href": "https://github.com/simonw/datasette/issues/1033", "label": "#1033"}, {"href": "https://github.com/simonw/datasette/issues/976", "label": "#976"}, {"href": "https://github.com/simonw/datasette/issues/1057", "label": "#1057"}, {"href": "https://github.com/simonw/datasette/issues/1058", "label": "#1058"}, {"href": "https://github.com/simonw/datasette/issues/1053", "label": "#1053"}]
changelog:the-internal-database changelog the-internal-database The _internal database As part of ongoing work to help Datasette handle much larger numbers of connected databases and tables (see Datasette Library ) Datasette now maintains an in-memory SQLite database with details of all of the attached databases, tables, columns, indexes and foreign keys. ( #1150 ) This will support future improvements such as a searchable, paginated homepage of all available tables. You can explore an example of this database by signing in as root to the latest.datasette.io demo instance and then navigating to latest.datasette.io/_internal . Plugins can use these tables to introspect attached data in an efficient way. Plugin authors should note that this is not yet considered a stable interface, so any plugins that use this may need to make changes prior to Datasette 1.0 if the _internal table schemas change. ["Changelog", "0.54 (2021-01-25)"] [{"href": "https://github.com/simonw/datasette/issues/417", "label": "Datasette Library"}, {"href": "https://github.com/simonw/datasette/issues/1150", "label": "#1150"}, {"href": "https://latest.datasette.io/login-as-root", "label": "signing in as root"}, {"href": "https://latest.datasette.io/_internal", "label": "latest.datasette.io/_internal"}]
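A hedged sketch of a plugin reading from _internal on startup; the tables table and its database_name column are assumptions based on the schema described above, and as noted this interface is not yet stable:

    from datasette import hookimpl

    @hookimpl
    def startup(datasette):
        async def inner():
            internal = datasette.get_database("_internal")
            # Count attached tables per connected database
            result = await internal.execute(
                "select database_name, count(*) as n from tables group by database_name"
            )
            for row in result.rows:
                print(row["database_name"], row["n"])
        return inner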
changelog:the-road-to-datasette-1-0 changelog the-road-to-datasette-1-0 The road to Datasette 1.0 I've assembled a milestone for Datasette 1.0 . The focus of the 1.0 release will be the following: Signify confidence in the quality/stability of Datasette Give plugin authors confidence that their plugins will work for the whole 1.x release cycle Provide the same confidence to developers building against Datasette JSON APIs If you have thoughts about what you would like to see for Datasette 1.0 you can join the conversation on issue #519 . ["Changelog", "0.44 (2020-06-11)"] [{"href": "https://github.com/simonw/datasette/milestone/7", "label": "milestone for Datasette 1.0"}, {"href": "https://github.com/simonw/datasette/issues/519", "label": "the conversation on issue #519"}]
changelog:through-for-joins-through-many-to-many-tables changelog through-for-joins-through-many-to-many-tables ?_through= for joins through many-to-many tables The new ?_through={json} argument to the Table view allows records to be filtered based on a many-to-many relationship. See Special table arguments for full documentation - here's an example . ( #355 ) This feature was added to help support facet by many-to-many , which isn't quite ready yet but will be coming in the next Datasette release. ["Changelog", "0.29 (2019-07-07)"] [{"href": "https://latest.datasette.io/fixtures/roadside_attractions?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22}", "label": "an example"}, {"href": "https://github.com/simonw/datasette/issues/355", "label": "#355"}, {"href": "https://github.com/simonw/datasette/issues/551", "label": "facet by many-to-many"}]
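The JSON argument can be built programmatically with just the standard library; this sketch reconstructs the linked example's URL:

    import json
    from urllib.parse import quote

    through = {
        "table": "roadside_attraction_characteristics",
        "column": "characteristic_id",
        "value": "1",
    }
    url = "/fixtures/roadside_attractions?_through=" + quote(json.dumps(through))
    print(url)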
changelog:url-building changelog url-building URL building The new datasette.urls family of methods can be used to generate URLs to key pages within the Datasette interface, both within custom templates and Datasette plugins. See Building URLs within plugins for more details. ( #904 ) ["Changelog", "0.51 (2020-10-31)"] [{"href": "https://github.com/simonw/datasette/issues/904", "label": "#904"}]
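A sketch of the family in use from a plugin; these methods are assumed to respect the base_url setting, so generated links keep working behind a proxy (the fixtures/facetable names are illustrative):

    from datasette import hookimpl

    @hookimpl
    def extra_template_vars(datasette):
        return {
            "home_url": datasette.urls.instance(),
            "fixtures_url": datasette.urls.database("fixtures"),
            "facetable_url": datasette.urls.table("fixtures", "facetable"),
        }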
changelog:v0-28-databases-that-change changelog v0-28-databases-that-change Supporting databases that change From the beginning of the project, Datasette has been designed with read-only databases in mind. If a database is guaranteed not to change, it opens up all kinds of interesting opportunities - from taking advantage of SQLite immutable mode and HTTP caching to bundling static copies of the database directly in a Docker container. The interesting ideas in Datasette explores this idea in detail. As my goals for the project have developed, I realized that read-only databases are no longer the right default. SQLite actually supports concurrent access very well, provided only one thread attempts to write to a database at a time, and I keep encountering sensible use-cases for running Datasette on top of a database that is processing inserts and updates. So, as of version 0.28 Datasette no longer assumes that a database file will not change. It is now safe to point Datasette at a SQLite database which is being updated by another process. Making this change was a lot of work - see tracking tickets #418 , #419 and #420 . It required new thinking around how Datasette should calculate table counts (an expensive operation against a large, changing database) and also meant reconsidering the "content hash" URLs Datasette has used in the past to optimize the performance of HTTP caches. Datasette can still run against immutable files and gains numerous performance benefits from doing so, but this is no longer the default behaviour. Take a look at the new Performance and caching documentation section for details on how to make the most of Datasette against data that you know will be staying read-only and immutable. ["Changelog", "0.28 (2019-05-19)"] [{"href": "https://simonwillison.net/2018/Oct/4/datasette-ideas/", "label": "The interesting ideas in Datasette"}, {"href": "https://github.com/simonw/datasette/issues/418", "label": "#418"}, {"href": "https://github.com/simonw/datasette/issues/419", "label": "#419"}, {"href": "https://github.com/simonw/datasette/issues/420", "label": "#420"}]
changelog:v0-28-faceting changelog v0-28-faceting Faceting improvements, and faceting plugins Datasette Facets provide an intuitive way to quickly summarize and interact with data. Previously the only supported faceting technique was column faceting, but 0.28 introduces two powerful new capabilities: facet-by-JSON-array and the ability to define further facet types using plugins. Facet by array ( #359 ) is only available if your SQLite installation provides the json1 extension. Datasette will automatically detect columns that contain JSON arrays of values and offer a faceting interface against those columns - useful for modelling things like tags without needing to break them out into a new table. See Facet by JSON array for more. The new register_facet_classes() plugin hook ( #445 ) can be used to register additional custom facet classes. Each facet class should provide two methods: suggest() which suggests facet selections that might be appropriate for a provided SQL query, and facet_results() which executes a facet operation and returns results. Datasette's own faceting implementations have been refactored to use the same API as these plugins. ["Changelog", "0.28 (2019-05-19)"] [{"href": "https://github.com/simonw/datasette/issues/359", "label": "#359"}, {"href": "https://github.com/simonw/datasette/pull/445", "label": "#445"}]
changelog:v0-28-medium-changes changelog v0-28-medium-changes Medium changes Datasette now conforms to the Black coding style ( #449 ) - and has a unit test to enforce this in the future New Special table arguments : ?columnname__in=value1,value2,value3 filter for executing SQL IN queries against a table, see Table arguments ( #433 ) ?columnname__date=yyyy-mm-dd filter which returns rows where the specified datetime column falls on the specified date ( 583b22a ) ?tags__arraycontains=tag filter which acts against a JSON array contained in a column ( 78e45ea ) ?_where=sql-fragment filter for the table view ( #429 ) ?_fts_table=mytable and ?_fts_pk=mycolumn query string options can be used to specify which FTS table to use for a search query - see Configuring full-text search for a table or view ( #428 ) You can now pass the same table filter multiple times - for example, ?content__not=world&content__not=hello will return all rows where the content column is neither hello nor world ( #288 ) … ["Changelog", "0.28 (2019-05-19)"] [{"href": "https://github.com/python/black", "label": "Black coding style"}, {"href": "https://github.com/simonw/datasette/pull/449", "label": "#449"}, {"href": "https://github.com/simonw/datasette/issues/433", "label": "#433"}, {"href": "https://github.com/simonw/datasette/commit/583b22aa28e26c318de0189312350ab2688c90b1", "label": "583b22a"}, {"href": "https://github.com/simonw/datasette/commit/78e45ead4d771007c57b307edf8fc920101f8733", "label": "78e45ea"}, {"href": "https://github.com/simonw/datasette/issues/429", "label": "#429"}, {"href": "https://github.com/simonw/datasette/issues/428", "label": "#428"}, {"href": "https://github.com/simonw/datasette/issues/288", "label": "#288"}, {"href": "https://github.com/simonw/datasette/issues/435", "label": "#435"}, {"href": "https://github.com/simonw/datasette/issues/462", "label": "#462"}, {"href": "https://github.com/simonw/datasette/issues/465", "label": "#465"}]
changelog:v0-28-publish-cloudrun changelog v0-28-publish-cloudrun datasette publish cloudrun Google Cloud Run is a brand new serverless hosting platform from Google, which allows you to build a Docker container which will run only when HTTP traffic is received and will shut down (and hence cost you nothing) the rest of the time. It's similar to Zeit's Now v1 Docker hosting platform, which sadly is no longer accepting signups from new users. The new datasette publish cloudrun command was contributed by Romain Primet ( #434 ) and publishes selected databases to a new Datasette instance running on Google Cloud Run. See Publishing to Google Cloud Run for full documentation. ["Changelog", "0.28 (2019-05-19)"] [{"href": "https://cloud.google.com/run/", "label": "Google Cloud Run"}, {"href": "https://hyperion.alpha.spectrum.chat/zeit/now/cannot-create-now-v1-deployments~d206a0d4-5835-4af5-bb5c-a17f0171fb25?m=MTU0Njk2NzgwODM3OA==", "label": "no longer accepting signups"}, {"href": "https://github.com/simonw/datasette/pull/434", "label": "#434"}]
changelog:v0-28-register-output-renderer changelog v0-28-register-output-renderer register_output_renderer plugins Russ Garrett implemented a new Datasette plugin hook called register_output_renderer ( #441 ) which allows plugins to create additional output renderers in addition to Datasette's default .json and .csv . Russ's in-development datasette-geo plugin includes an example of this hook being used to output .geojson automatically converted from SpatiaLite. ["Changelog", "0.28 (2019-05-19)"] [{"href": "https://github.com/simonw/datasette/pull/441", "label": "#441"}, {"href": "https://github.com/russss/datasette-geo", "label": "datasette-geo"}, {"href": "https://github.com/russss/datasette-geo/blob/d4cecc020848bbde91e9e17bf352f7c70bc3dccf/datasette_plugin_geo/geojson.py", "label": "an example"}]
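A hedged sketch following the dictionary shape documented for this release (later Datasette versions changed these keys); the .txt extension and pipe-separated formatting are illustrative:

    from datasette import hookimpl

    def render_txt(args, data, view_name):
        # data carries the same keys the built-in JSON renderer receives
        lines = [" | ".join(str(v) for v in row) for row in data.get("rows", [])]
        return {
            "body": "\n".join(lines),
            "content_type": "text/plain; charset=utf-8",
            "status_code": 200,
        }

    @hookimpl
    def register_output_renderer(datasette):
        return {"extension": "txt", "callback": render_txt}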
changelog:v0-29-medium-changes changelog v0-29-medium-changes Easier custom templates for table rows If you want to customize the display of individual table rows, you can do so using a _table.html template include that looks something like this: {% for row in display_rows %} <div> <h2>{{ row["title"] }}</h2> <p>{{ row["description"] }}</p> <p>Category: {{ row.display("category_id") }}</p> </div> {% endfor %} This is a backwards incompatible change . If you previously had a custom template called _rows_and_columns.html you need to rename it to _table.html . See Custom templates for full details. ["Changelog", "0.29 (2019-07-07)"] []
changelog:writable-canned-queries changelog writable-canned-queries Writable canned queries Datasette's Canned queries feature lets you define SQL queries in metadata.json which can then be executed by users visiting a specific URL. https://latest.datasette.io/fixtures/neighborhood_search for example. Canned queries were previously restricted to SELECT , but Datasette 0.44 introduces the ability for canned queries to execute INSERT or UPDATE queries as well, using the new "write": true property ( #800 ): { "databases": { "dogs": { "queries": { "add_name": { "sql": "INSERT INTO names (name) VALUES (:name)", "write": true } } } } } See Writable canned queries for more details. ["Changelog", "0.44 (2020-06-11)"] [{"href": "https://latest.datasette.io/fixtures/neighborhood_search", "label": "https://latest.datasette.io/fixtures/neighborhood_search"}, {"href": "https://github.com/simonw/datasette/issues/800", "label": "#800"}]
contributing:contributing-alpha-beta contributing contributing-alpha-beta Alpha and beta releases Alpha and beta releases are published to preview upcoming features that may not yet be stable - in particular to preview new plugin hooks. You are welcome to try these out, but please be aware that details may change before the final release. Please join discussions on the issue tracker to share your thoughts and experiences with alpha and beta features that you try out. ["Contributing"] [{"href": "https://github.com/simonw/datasette/issues", "label": "discussions on the issue tracker"}]
contributing:contributing-bug-fix-branch contributing contributing-bug-fix-branch Releasing bug fixes from a branch If it's necessary to publish a bug fix release without shipping new features that have landed on main a release branch can be used. Create it from the relevant last tagged release like so: git branch 0.52.x 0.52.4 git checkout 0.52.x Next cherry-pick the commits containing the bug fixes: git cherry-pick COMMIT Write the release notes in the branch, and update the version number in version.py . Then push the branch: git push -u origin 0.52.x Once the tests have completed, publish the release from that branch target using the GitHub Draft a new release form. Finally, cherry-pick the commit with the release notes and version number bump across to main : git checkout main git cherry-pick COMMIT git push ["Contributing"] [{"href": "https://github.com/simonw/datasette/releases/new", "label": "Draft a new release"}]
contributing:contributing-debugging contributing contributing-debugging Debugging Any errors that occur while Datasette is running will display a stack trace on the console. You can tell Datasette to open an interactive pdb debugger session if an error occurs using the --pdb option: datasette --pdb fixtures.db ["Contributing"] []
contributing:contributing-documentation contributing contributing-documentation Editing and building the documentation Datasette's documentation lives in the docs/ directory and is deployed automatically using Read The Docs . The documentation is written using reStructuredText. You may find this article on The subset of reStructuredText worth committing to memory useful. You can build it locally by installing sphinx and sphinx_rtd_theme in your Datasette development environment and then running make html directly in the docs/ directory: # You may first need to activate your virtual environment: source venv/bin/activate # Install the dependencies needed to build the docs pip install -e .[docs] # Now build the docs cd docs/ make html This will create the HTML version of the documentation in docs/_build/html . You can open it in your browser like so: open _build/html/index.html Any time you make changes to a .rst file you can re-run make html to update the built documents, then refresh them in your browser. For added productivity, you can use sphinx-autobuild to run Sphinx in auto-build mode. This will run a local webserver serving the docs that automatically rebuilds them and refreshes the page any time you hit save in your editor. sphinx-autobuild will have been installed when you ran pip install -e .[docs] . In your docs/ directory you can start the server by running the following: make livehtml Now browse to http://localhost:8000/ to view the documentation. Any edits you make should be instantly reflected in your browser. ["Contributing"] [{"href": "https://readthedocs.org/", "label": "Read The Docs"}, {"href": "https://simonwillison.net/2018/Aug/25/restructuredtext/", "label": "The subset of reStructuredText worth committing to memory"}, {"href": "https://pypi.org/project/sphinx-autobuild/", "label": "sphinx-autobuild"}]
contributing:contributing-formatting contributing contributing-formatting Code formatting Datasette uses opinionated code formatters: Black for Python and Prettier for JavaScript. These formatters are enforced by Datasette's continuous integration: if a commit includes Python or JavaScript code that does not match the style enforced by those tools, the tests will fail. When developing locally, you can verify and correct the formatting of your code using these tools. ["Contributing"] [{"href": "https://github.com/psf/black", "label": "Black"}, {"href": "https://prettier.io/", "label": "Prettier"}]
contributing:contributing-formatting-black contributing contributing-formatting-black Running Black Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: $ black . --check All done! ✨ 🍰 ✨ 95 files would be left unchanged. If any of your code does not conform to Black you can run this to automatically fix those problems: $ black . reformatted ../datasette/setup.py All done! ✨ 🍰 ✨ 1 file reformatted, 94 files left unchanged. ["Contributing", "Code formatting"] []
contributing:contributing-formatting-prettier contributing contributing-formatting-prettier Prettier To install Prettier, install Node.js and then run the following in the root of your datasette repository checkout: $ npm install This will install Prettier in a node_modules directory. You can then check that your code matches the coding style like so: $ npm run prettier -- --check > prettier > prettier 'datasette/static/*[!.min].js' "--check" Checking formatting... [warn] datasette/static/plugins.js [warn] Code style issues found in the above file(s). Forgot to run Prettier? You can fix any problems by running: $ npm run fix ["Contributing", "Code formatting"] [{"href": "https://nodejs.org/en/download/package-manager/", "label": "install Node.js"}]
contributing:contributing-release contributing contributing-release Release process Datasette releases are performed using tags. When a new release is published on GitHub, a GitHub Action workflow will perform the following: Run the unit tests against all supported Python versions. If the tests pass... Build a Docker image of the release and push a tag to https://hub.docker.com/r/datasetteproject/datasette Re-point the "latest" tag on Docker Hub to the new image Build a wheel bundle of the underlying Python source code Push that new wheel up to PyPI: https://pypi.org/project/datasette/ To deploy new releases you will need to have push access to the main Datasette GitHub repository. Datasette follows Semantic Versioning : major.minor.patch We increment major for backwards-incompatible releases. Datasette is currently pre-1.0 so the major version is always 0 . We increment minor for new features. We increment patch for bugfix releases. Alpha and beta releases may have an additional a0 or b0 suffix - the integer component will be incremented with each subsequent alpha or beta. To release a new version, first create a commit that updates the version number in datasette/version.py and the changelog with highlights of the new version. An example commit can be seen here : # Update changelog git commit -m " Release 0.51a1 Refs #1056, #1039, #998, #1045, #1033, #1036, #1034, #976, #1057, #1058, #1053, #1064, #1066" -a git push Referencing the issues that are part of the release in the commit message ensures the name of the release shows up on those issue pages, e.g. here . You can generate the list of issue referen… ["Contributing"] [{"href": "https://github.com/simonw/datasette/blob/main/.github/workflows/deploy-latest.yml", "label": "GitHub Action workflow"}, {"href": "https://hub.docker.com/r/datasetteproject/datasette", "label": "https://hub.docker.com/r/datasetteproject/datasette"}, {"href": "https://pypi.org/project/datasette/", "label": "https://pypi.org/project/datasette/"}, {"href": "https://semver.org/", "label": "Semantic Versioning"}, {"href": "https://github.com/simonw/datasette/commit/0e1e89c6ba3d0fbdb0823272952cf356f3016def", "label": "commit can be seen here"}, {"href": "https://github.com/simonw/datasette/issues/581#ref-commit-d56f402", "label": "here"}, {"href": "https://observablehq.com/@simonw/extract-issue-numbers-from-pasted-text", "label": "Extract issue numbers from pasted text"}, {"href": "https://github.com/simonw/datasette/releases/new", "label": "a new release"}, {"href": "https://euangoddard.github.io/clipboard2markdown/", "label": "Paste to Markdown tool"}, {"href": "https://datasette.io/", "label": "datasette.io"}, {"href": "https://github.com/simonw/datasette.io/blob/main/news.yaml", "label": "news.yaml"}]
contributing:contributing-upgrading-codemirror contributing contributing-upgrading-codemirror Upgrading CodeMirror Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . Here are the steps for upgrading to a new version of CodeMirror: Download and extract latest CodeMirror zip file from https://codemirror.net/codemirror.zip Rename lib/codemirror.js to codemirror-5.57.0.js (using latest version number) Rename lib/codemirror.css to codemirror-5.57.0.css Rename mode/sql/sql.js to codemirror-5.57.0-sql.js Edit both JavaScript files to make the top license comment a /* */ block instead of multiple // lines Minify the JavaScript files like this: npx uglify-js codemirror-5.57.0.js -o codemirror-5.57.0.min.js --comments '/LICENSE/' npx uglify-js codemirror-5.57.0-sql.js -o codemirror-5.57.0-sql.min.js --comments '/LICENSE/' Check that the LICENSE comment did indeed survive minification Minify the CSS file like this: npx clean-css-cli codemirror-5.57.0.css -o codemirror-5.57.0.min.css Edit the _codemirror.html template to reference the new files git rm the old files, git add the new files ["Contributing"] [{"href": "https://codemirror.net/", "label": "CodeMirror"}, {"href": "https://latest.datasette.io/fixtures", "label": "this page"}, {"href": "https://codemirror.net/codemirror.zip", "label": "https://codemirror.net/codemirror.zip"}]
contributing:devenvironment contributing devenvironment Setting up a development environment If you have Python 3.6 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps. If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account. Now clone that repository somewhere on your computer: git clone git@github.com:YOURNAME/datasette If you want to get started without creating your own fork, you can do this instead: git clone git@github.com:simonw/datasette The next step is to create a virtual environment for your project and use it to install Datasette's dependencies: cd datasette # Create a virtual environment in ./venv python3 -m venv ./venv # Now activate the virtual environment, so pip can install into it source venv/bin/activate # Install Datasette and its testing dependencies python3 -m pip install -e .[test] That last line does most of the work: pip install -e means "install this package in a way that allows me to edit the source code in place". The .[test] option means "use the setup.py in this directory and install the optional testing dependencies as well". Once you have done this, you can run the Datasette unit tests from inside your datasette/ directory using pytest like so: pytest To run Datasette itself, type datasette . You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. You can create a copy of that database by running this command: python tests/fixtures.py fixtures.db Now you can run Datasette against the new fixtures database like so: datasette fixtures.db This will start a server at http://127.0.0.1:8001/ . Any changes you make in the datasette/templates or datasette/st… ["Contributing"] [{"href": "https://docs.python-guide.org/starting/install3/osx/", "label": "is using homebrew"}, {"href": "https://github.com/simonw/datasette/fork", "label": "create a fork of datasette"}, {"href": "https://docs.pytest.org/", "label": "pytest"}]
contributing:general-guidelines contributing general-guidelines General guidelines master should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them. ["Contributing"] []
contributing:id1 contributing id1 Contributing Datasette is an open source project. We welcome contributions! This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins . [] []
csv_export:a-note-on-urls csv_export a-note-on-urls A note on URLs The default URL for the CSV representation of a table is that table with .csv appended to it: https://latest.datasette.io/fixtures/facetable - HTML interface https://latest.datasette.io/fixtures/facetable.csv - CSV export https://latest.datasette.io/fixtures/facetable.json - JSON API This pattern doesn't work for tables with names that already end in .csv or .json . For those tables, you can instead use the _format= query string parameter: https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv - HTML interface https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=csv - CSV export https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=json - JSON API ["CSV export"] [{"href": "https://latest.datasette.io/fixtures/facetable", "label": "https://latest.datasette.io/fixtures/facetable"}, {"href": "https://latest.datasette.io/fixtures/facetable.csv", "label": "https://latest.datasette.io/fixtures/facetable.csv"}, {"href": "https://latest.datasette.io/fixtures/facetable.json", "label": "https://latest.datasette.io/fixtures/facetable.json"}, {"href": "https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv", "label": "https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv"}, {"href": "https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=csv", "label": "https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=csv"}, {"href": "https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=json", "label": "https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=json"}]
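For example, a quick check of the two representations using curl (a hypothetical session, with the URLs taken from the examples above):

$ curl 'https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=csv'
$ curl 'https://latest.datasette.io/fixtures/table%2Fwith%2Fslashes.csv?_format=json'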
csv_export:csv-export-url-parameters csv_export csv-export-url-parameters URL parameters The following options can be used to customize the CSVs returned by Datasette. ?_header=off This removes the first row of the CSV file specifying the headings - only the row data will be returned. ?_stream=on Stream all matching records, not just the first page of results. See below. ?_dl=on Causes Datasette to return a content-disposition: attachment; filename="filename.csv" header. ["CSV export"] []
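These parameters can be combined in a single query string. As an illustrative sketch, the following URL would stream every matching row as a headerless CSV and ask the browser to download it as a file:

https://latest.datasette.io/fixtures/facetable.csv?_stream=on&_header=off&_dl=on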
csv_export:id1 csv_export id1 CSV export Any Datasette table, view or custom SQL query can be exported as CSV. To obtain the CSV representation of the table you are looking at, click the "this data as CSV" link. You can also use the advanced export form for more control over the resulting file, which looks like this and has the following options: download file - instead of displaying CSV in your browser, this forces your browser to download the CSV to your downloads directory. expand labels - if your table has any foreign key references this option will cause the CSV to gain additional COLUMN_NAME_label columns with a label for each foreign key derived from the linked table. In this example the city_id column is accompanied by a city_id_label column. stream all rows - by default CSV files only contain the first max_returned_rows records. This option will cause Datasette to loop through every matching record and return them as a single CSV file. You can try that out on https://latest.datasette.io/fixtures/facetable?_size=4 [] [{"href": "https://latest.datasette.io/fixtures/facetable.csv?_labels=on&_size=max", "label": "In this example"}, {"href": "https://latest.datasette.io/fixtures/facetable?_size=4", "label": "https://latest.datasette.io/fixtures/facetable?_size=4"}]
csv_export:streaming-all-records csv_export streaming-all-records Streaming all records The stream all rows option is designed to be as efficient as possible - under the hood it takes advantage of Python 3 asyncio capabilities and Datasette's efficient pagination to stream back the full CSV file. Since databases can get pretty large, by default this option is capped at 100MB - if a table returns more than 100MB of data the last line of the CSV will be a truncation error message. You can increase or remove this limit using the max_csv_mb config setting. You can also disable the CSV export feature entirely using allow_csv_stream . ["CSV export"] []
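If you need larger streamed exports, you can raise or remove the cap when starting Datasette. A sketch, assuming the --setting mechanism used elsewhere in these docs and that a value of 0 removes the limit:

$ datasette fixtures.db --setting max_csv_mb 0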
custom_templates:css-classes-on-the-body custom_templates css-classes-on-the-body CSS classes on the <body> Every default template includes CSS classes in the body designed to support custom styling. The index template (the top level page at / ) gets this: <body class="index"> The database template ( /dbname ) gets this: <body class="db db-dbname"> The custom SQL template ( /dbname?sql=... ) gets this: <body class="query db-dbname"> A canned query template ( /dbname/queryname ) gets this: <body class="query db-dbname query-queryname"> The table template ( /dbname/tablename ) gets: <body class="table db-dbname table-tablename"> The row template ( /dbname/tablename/rowid ) gets: <body class="row db-dbname table-tablename"> The db-x and table-x classes use the database or table names themselves if they are valid CSS identifiers. If they aren't, we strip any invalid characters out and append a 6 character md5 digest of the original name, in order to ensure that multiple tables which resolve to the same stripped character version still have different CSS classes. Some examples: "simple" => "simple" "MixedCase" => "MixedCase" "-no-leading-hyphens" => "no-leading-hyphens-65bea6" "_no-leading-underscores" => "no-leading-underscores-b921bc" "no spaces" => "no-spaces-7088d7" "-" => "336d5e" "no $ characters" => "no--characters-59e024" <td> and <th> elements also get custom CSS classes reflecting the database column they are representing, for example: <table> <thead> <tr> <th class="col-id" scope="col">id</th> <th class="col-name" scope="col">name</th> </tr> </thead> <tbody> <tr> <td class="col-id"><a href="...">1</a></td> … ["Custom pages and templates", "Custom CSS and JavaScript"] []
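A minimal Python sketch of this naming scheme, written here for illustration and not guaranteed to match Datasette's exact implementation:

import hashlib
import re

def to_css_class(name):
    # Already a valid CSS identifier? Use it unchanged.
    if re.match(r"^[a-zA-Z]+[a-zA-Z0-9\-]*$", name):
        return name
    # Otherwise strip leading underscores/hyphens, turn whitespace into
    # hyphens, drop any remaining invalid characters, then append a
    # 6 character md5 digest of the original name.
    digest = hashlib.md5(name.encode("utf8")).hexdigest()[:6]
    stripped = name.lstrip("_").lstrip("-")
    stripped = "-".join(stripped.split())
    stripped = re.sub(r"[^a-zA-Z0-9\-]", "", stripped)
    return "-".join(bit for bit in (stripped, digest) if bit)

# Reproduces the examples above:
assert to_css_class("no spaces") == "no-spaces-7088d7"
assert to_css_class("-") == "336d5e"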
custom_templates:custom-pages-404 custom_templates custom-pages-404 Returning 404s To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function: {% if not rows %} {{ raise_404("Content not found") }} {% endif %} If you call raise_404() the other content in your template will be ignored. ["Custom pages and templates", "Custom pages"] []
custom_templates:custom-pages-errors custom_templates custom-pages-errors Custom error pages Datasette returns an error page if an unexpected error occurs, access is forbidden or content cannot be found. You can customize the response returned for these errors by providing a custom error page template. Content not found errors use a 404.html template. Access denied errors use 403.html . Invalid input errors use 400.html . Unexpected errors of other kinds use 500.html . If a template for the specific error code is not found a template called error.html will be used instead. If you do not provide that template Datasette's default error.html template will be used. The error template will be passed the following context: status - integer The integer HTTP status code, e.g. 404, 500, 403, 400. error - string Details of the specific error, usually a full sentence. title - string or None A title for the page representing the class of error. This is often None for errors that do not provide a title separate from their error message. ["Custom pages and templates"] [{"href": "https://github.com/simonw/datasette/blob/main/datasette/templates/error.html", "label": "default error.html template"}]
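A minimal sketch of a custom 404.html template using the documented context variables:

<h1>{{ status }}: {{ title or "Not found" }}</h1>
<p>{{ error }}</p>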
custom_templates:custom-pages-headers custom_templates custom-pages-headers Custom headers and status codes Custom pages default to being served with a content-type of text/html; charset=utf-8 and a 200 status code. You can change these by calling a custom function from within your template. For example, to serve a custom page with a 418 I'm a teapot HTTP status code, create a file in pages/teapot.html containing the following: {{ custom_status(418) }} <html> <head><title>Teapot</title></head> <body> I'm a teapot </body> </html> To serve a custom HTTP header, add a custom_header(name, value) function call. For example: {{ custom_status(418) }} {{ custom_header("x-teapot", "I am") }} <html> <head><title>Teapot</title></head> <body> I'm a teapot </body> </html> You can verify this is working using curl like this: $ curl -I 'http://127.0.0.1:8001/teapot' HTTP/1.1 418 date: Sun, 26 Apr 2020 18:38:30 GMT server: uvicorn x-teapot: I am content-type: text/html; charset=utf-8 ["Custom pages and templates", "Custom pages"] []
custom_templates:custom-pages-parameters custom_templates custom-pages-parameters Path parameters for pages You can define custom pages that match multiple paths by creating files with {variable} definitions in their filenames. For example, to capture any request to a URL matching /about/* , you would create a template in the following location: templates/pages/about/{slug}.html A hit to /about/news would render that template and pass in a variable called slug with a value of "news" . If you use this mechanism don't forget to return a 404 if the referenced content could not be found. You can do this using {{ raise_404() }} described below. Templates defined using custom page routes work particularly well with the sql() template function from datasette-template-sql or the graphql() template function from datasette-graphql . ["Custom pages and templates", "Custom pages"] [{"href": "https://github.com/simonw/datasette-template-sql", "label": "datasette-template-sql"}, {"href": "https://github.com/simonw/datasette-graphql#the-graphql-template-function", "label": "datasette-graphql"}]
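As a small sketch, a templates/pages/about/{slug}.html template could validate the captured variable and fall back to a 404 like this (the allowed slugs here are made up for illustration):

{% if slug not in ("news", "team") %}
    {{ raise_404("About page not found") }}
{% endif %}
<h1>About: {{ slug }}</h1>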
custom_templates:custom-pages-redirects custom_templates custom-pages-redirects Custom redirects You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html : {{ custom_redirect("https://github.com/simonw/datasette") }} Now requests to http://localhost:8001/datasette will result in a redirect. These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function: {{ custom_redirect("https://github.com/simonw/datasette", 301) }} ["Custom pages and templates", "Custom pages"] []
custom_templates:customization custom_templates customization Custom pages and templates Datasette provides a number of ways of customizing the way data is displayed. [] []
custom_templates:customization-css-and-javascript custom_templates customization-css-and-javascript Custom CSS and JavaScript When you launch Datasette, you can specify a custom metadata file like this: datasette mydb.db --metadata metadata.json Your metadata.json file can include links that look like this: { "extra_css_urls": [ "https://simonwillison.net/static/css/all.bf8cd891642c.css" ], "extra_js_urls": [ "https://code.jquery.com/jquery-3.2.1.slim.min.js" ] } The extra CSS and JavaScript files will be linked in the <head> of every page: <link rel="stylesheet" href="https://simonwillison.net/static/css/all.bf8cd891642c.css"> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js"></script> You can also specify a SRI (subresource integrity hash) for these assets: { "extra_css_urls": [ { "url": "https://simonwillison.net/static/css/all.bf8cd891642c.css", "sri": "sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI" } ], "extra_js_urls": [ { "url": "https://code.jquery.com/jquery-3.2.1.slim.min.js", "sri": "sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=" } ] } This will produce: <link rel="stylesheet" href="https://simonwillison.net/static/css/all.bf8cd891642c.css" integrity="sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI" crossorigin="anonymous"> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=" crossorigin="anonymous"></script> Modern browsers will only execute the stylesheet or JavaScript if the SRI hash matches the content served. You can generate hashes using www.srihash.org Items in "extra_js_urls" can specify "module": true if they reference JavaScript that uses JavaScript modules . This configuration: { "extra_js_urls": [ { "url": "https://exa… ["Custom pages and templates"] [{"href": "https://www.srihash.org/", "label": "www.srihash.org"}, {"href": "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules", "label": "JavaScript modules"}]
custom_templates:customization-custom-templates custom_templates customization-custom-templates Custom templates By default, Datasette uses default templates that ship with the package. You can over-ride these templates by specifying a custom --template-dir like this: datasette mydb.db --template-dir=mytemplates/ Datasette will now first look for templates in that directory, and fall back on the defaults if no matches are found. It is also possible to over-ride templates on a per-database, per-row or per-table basis. The lookup rules Datasette uses are as follows: Index page (/): index.html Database page (/mydatabase): database-mydatabase.html database.html Custom query page (/mydatabase?sql=...): query-mydatabase.html query.html Canned query page (/mydatabase/canned-query): query-mydatabase-canned-query.html query-mydatabase.html query.html Table page (/mydatabase/mytable): table-mydatabase-mytable.html table.html Row page (/mydatabase/mytable/id): row-mydatabase-mytable.html row.html Table of rows and columns include on table page: _table-table-mydatabase-mytable.html _table-mydatabase-mytable.html _table.html Table of rows and columns include on row page: _table-row-mydatabase-mytable.html _table-mydatabase-mytable.html _table.html If a table name has spaces or other unexpected characters in it, the template filename will follow the same rules as our custom <body> CSS classes - for example, a table called "Food Trucks" will attempt to load the following templates: table-mydatabase-Food-Trucks-399138.html table.html You can find out which templates were considered for a specific page by viewing source on that page and looking for an HTML comment at the bottom. The comment will look something like this: <!-- Templates considered: *query-mydb-tz.html, query-mydb.html, query.html… ["Custom pages and templates"] [{"href": "https://github.com/simonw/datasette/blob/master/datasette/templates/_table.html", "label": "can be seen here"}]
custom_templates:customization-static-files custom_templates customization-static-files Serving static files Datasette can serve static files for you, using the --static option. Consider the following directory structure: metadata.json static/styles.css static/app.js You can start Datasette using --static static:static/ to serve those files from the /static/ mount point: $ datasette -m metadata.json --static static:static/ --memory The following URLs will now serve the content from those CSS and JS files: http://localhost:8001/static/styles.css http://localhost:8001/static/app.js You can reference those files from metadata.json like so: { "extra_css_urls": [ "/static/styles.css" ], "extra_js_urls": [ "/static/app.js" ] } ["Custom pages and templates", "Custom CSS and JavaScript"] []
custom_templates:id1 custom_templates id1 Custom pages You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory. For example, to add a custom page that is served at http://localhost:8001/about you would create a file in templates/pages/about.html , then start Datasette like this: $ datasette mydb.db --template-dir=templates/ You can nest directories within pages to create a nested structure. To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html . ["Custom pages and templates"] []
custom_templates:publishing-static-assets custom_templates publishing-static-assets Publishing static assets The datasette publish command can be used to publish your static assets, using the same syntax as above: $ datasette publish cloudrun mydb.db --static static:static/ This will upload the contents of the static/ directory as part of the deployment, and configure Datasette to correctly serve the assets. ["Custom pages and templates", "Custom CSS and JavaScript"] []
deploying:apache-proxy-configuration deploying apache-proxy-configuration Apache proxy configuration For Apache , you can use the ProxyPass directive. First make sure the following lines are uncommented: LoadModule proxy_module lib/httpd/modules/mod_proxy.so LoadModule proxy_http_module lib/httpd/modules/mod_proxy_http.so Then add this directive to proxy traffic: ProxyPass /datasette-prefix/ http://127.0.0.1:8009/datasette-prefix/ ["Deploying Datasette", "Running Datasette behind a proxy"] [{"href": "https://httpd.apache.org/", "label": "Apache"}]
deploying:deploying deploying deploying Deploying Datasette The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. You can deploy Datasette to other hosting providers using the instructions on this page. [] []
deploying:deploying-buildpacks deploying deploying-buildpacks Deploying using buildpacks Some hosting providers such as Heroku , DigitalOcean App Platform and Scalingo support the Buildpacks standard for deploying Python web applications. Deploying Datasette on these platforms requires two files: requirements.txt and Procfile . The requirements.txt file lets the platform know which Python packages should be installed. It should contain datasette at a minimum, but can also list any Datasette plugins you wish to install - for example: datasette datasette-vega The Procfile lets the hosting platform know how to run the command that serves web traffic. It should look like this: web: datasette . -h 0.0.0.0 -p $PORT --cors The $PORT environment variable is provided by the hosting platform. --cors enables CORS requests from JavaScript running on other websites to your domain - omit this if you don't want to allow CORS. You can add additional Datasette Settings options here too. These two files should be enough to deploy Datasette on any host that supports buildpacks. Datasette will serve any SQLite files that are included in the root directory of the application. If you want to build SQLite files or download them as part of the deployment process you can do so using a bin/post_compile file. For example, the following bin/post_compile will download an example database that will then be served by Datasette: wget https://fivethirtyeight.datasettes.com/fivethirtyeight.db simonw/buildpack-datasette-demo is an example GitHub repository showing a Datasette configuration that can be deployed to a buildpack-supporting host. ["Deploying Datasette"] [{"href": "https://www.heroku.com/", "label": "Heroku"}, {"href": "https://www.digitalocean.com/docs/app-platform/", "label": "DigitalOcean App Platform"}, {"href": "https://scalingo.com/", "label": "Scalingo"}, {"href": "https://buildpacks.io/", "label": "Buildpacks standard"}, {"href": "https://github.com/simonw/buildpack-datasette-demo", "label": "simonw/buildpack-datasette-demo"}]
deploying:deploying-fundamentals deploying deploying-fundamentals Deployment fundamentals Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000. If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette. ["Deploying Datasette"] []
deploying:deploying-proxy deploying deploying-proxy Running Datasette behind a proxy You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: datasette my-database.db --setting base_url /my-datasette/ -p 8009 This will run Datasette with the following URLs: http://127.0.0.1:8009/my-datasette/ - the Datasette homepage http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance. ["Deploying Datasette"] []
deploying:deploying-systemd deploying deploying-systemd Running Datasette using systemd You can run Datasette on Ubuntu or Debian systems using systemd . First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . You can copy a test database into that folder like so: cd /home/ubuntu/datasette-root curl -O https://latest.datasette.io/fixtures.db Create a file at /etc/systemd/system/datasette.service with the following contents: [Unit] Description=Datasette After=network.target [Service] Type=simple User=ubuntu Environment=DATASETTE_SECRET= WorkingDirectory=/home/ubuntu/datasette-root ExecStart=datasette serve . -h 127.0.0.1 -p 8000 Restart=on-failure [Install] WantedBy=multi-user.target Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: $ python3 -c 'import secrets; print(secrets.token_hex(32))' This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. You can start the Datasette process running using the following: sudo systemctl daemon-reload sudo systemctl start datasette.service You can confirm that Datasette is running on port 8000 like so: curl 127.0.0.1:8000/-/versions.json # Should output JSON showing the installed version Datasette will not be accessible from outside the server because it is liste… ["Deploying Datasette"] [{"href": "https://ubuntu.com/tutorials/install-and-configure-nginx#1-overview", "label": "a tutorial on installing nginx"}]
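Standard systemd tooling can then be used to manage the service - for example, to start it automatically at boot and follow its logs (assuming the unit file above):

sudo systemctl enable datasette.service
sudo journalctl -u datasette.service -f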
deploying:nginx-proxy-configuration deploying nginx-proxy-configuration Nginx proxy configuration Here is an example of an nginx configuration file that will proxy traffic to Datasette: daemon off; events { worker_connections 1024; } http { server { listen 80; location /my-datasette { proxy_pass http://127.0.0.1:8009/my-datasette; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } } ["Deploying Datasette", "Running Datasette behind a proxy"] [{"href": "https://nginx.org/", "label": "nginx"}]
ecosystem:dogsheep ecosystem dogsheep Dogsheep Dogsheep is a collection of tools for personal analytics using SQLite and Datasette. The project provides tools like github-to-sqlite and twitter-to-sqlite that can import data from different sources in order to create a personal data warehouse. Personal Data Warehouses: Reclaiming Your Data is a talk that explains Dogsheep and demonstrates it in action. ["The Datasette Ecosystem"] [{"href": "https://dogsheep.github.io/", "label": "Dogsheep"}, {"href": "https://datasette.io/tools/github-to-sqlite", "label": "github-to-sqlite"}, {"href": "https://datasette.io/tools/twitter-to-sqlite", "label": "twitter-to-sqlite"}, {"href": "https://simonwillison.net/2020/Nov/14/personal-data-warehouses/", "label": "Personal Data Warehouses: Reclaiming Your Data"}]
ecosystem:ecosystem ecosystem ecosystem The Datasette Ecosystem Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data. These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality. The Datasette project website includes a directory of plugins and a directory of tools: Plugins directory on datasette.io Tools directory on datasette.io [] [{"href": "https://datasette.io/", "label": "Datasette project website"}, {"href": "https://datasette.io/plugins", "label": "Plugins directory on datasette.io"}, {"href": "https://datasette.io/tools", "label": "Tools directory on datasette.io"}]
ecosystem:sqlite-utils ecosystem sqlite-utils sqlite-utils sqlite-utils is a key building block for the wider Datasette ecosystem. It provides a collection of utilities for manipulating SQLite databases, both as a Python library and a command-line utility. Features include: Insert data into a SQLite database from JSON, CSV or TSV, automatically creating tables with the correct schema or altering existing tables to add missing columns. Configure tables for use with SQLite full-text search, including creating triggers needed to keep the search index up-to-date. Modify tables in ways that are not supported by SQLite's default ALTER TABLE syntax - for example changing the types of columns or selecting a new primary key for a table. Add foreign keys to existing database tables. Extract columns of data into a separate lookup table. ["The Datasette Ecosystem"] [{"href": "https://sqlite-utils.datasette.io/", "label": "sqlite-utils"}]
facets:facets-in-metadata-json facets facets-in-metadata-json Facets in metadata.json You can turn facets on by default for specific tables by adding them to a "facets" key in a Datasette Metadata file. Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: { "databases": { "sf-trees": { "tables": { "Street_Tree_List": { "facets": ["qLegalStatus"] } } } } } Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view. ["Facets"] []
facets:facets-in-query-strings facets facets-in-query-strings Facets in query strings To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL. For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this: /dbname/tablename?_facet=state&_facet=city_id This works for both the HTML interface and the .json view. When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this: { "state": { "name": "state", "results": [ { "value": "CA", "label": "CA", "count": 10, "toggle_url": "http://...?_facet=city_id&_facet=state&state=CA", "selected": false }, { "value": "MI", "label": "MI", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&state=MI", "selected": false }, { "value": "MC", "label": "MC", "count": 1, "toggle_url": "http://...?_facet=city_id&_facet=state&state=MC", "selected": false } ], "truncated": false }, "city_id": { "name": "city_id", "results": [ { "value": 1, "label": "San Francisco", "count": 6, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=1", "selected": false }, { "value": 2, "label": "Los Angeles", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=2", "selected": false }, { "value": 3, "label": "Detroit", "count": 4, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=3", "selected": false }, { "value": 4, "label": "Memnonia", "count": 1, "toggle_url": "http://...?_facet=city_id&_facet=state&city_id=4", "selected": false } ], "truncated": false } } If Datasette detects that a column is a foreign key… ["Facets"] []
facets:id1 facets id1 Facets Datasette facets can be used to add a faceted browse interface to any database table. With facets, tables are displayed along with a summary showing the most common values in specified columns. These values can be selected to further filter the table. Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table. [] []
facets:id2 facets id2 Facet by JSON array If your SQLite installation provides the json1 extension (you can check using /-/versions ) Datasette will automatically detect columns that contain JSON arrays of values and offer a faceting interface against those columns. This is useful for modelling things like tags without needing to break them out into a new table. Example here: latest.datasette.io/fixtures/facetable?_facet_array=tags ["Facets"] [{"href": "https://latest.datasette.io/fixtures/facetable?_facet_array=tags", "label": "latest.datasette.io/fixtures/facetable?_facet_array=tags"}]
facets:id3 facets id3 Facet by date If Datasette finds any columns that contain dates in the first 100 values, it will offer a faceting interface against the dates of those values. This works especially well against timestamp values such as 2019-03-01 12:44:00 . Example here: latest.datasette.io/fixtures/facetable?_facet_date=created ["Facets"] [{"href": "https://latest.datasette.io/fixtures/facetable?_facet_date=created", "label": "latest.datasette.io/fixtures/facetable?_facet_date=created"}]
facets:speeding-up-facets-with-indexes facets speeding-up-facets-with-indexes Speeding up facets with indexes The performance of facets can be greatly improved by adding indexes on the columns you wish to facet by. Adding indexes can be performed using the sqlite3 command-line utility. Here's how to add an index on the state column in a table called Food_Trucks : $ sqlite3 mydatabase.db SQLite version 3.19.3 2017-06-27 16:48:08 Enter ".help" for usage hints. sqlite> CREATE INDEX Food_Trucks_state ON Food_Trucks("state"); ["Facets"] []
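You can confirm that SQLite will use the new index with EXPLAIN QUERY PLAN. A typical facet counts rows grouped by the column, so a sketch of the check looks like this - the plan should mention the Food_Trucks_state index rather than a plain table scan:

sqlite> EXPLAIN QUERY PLAN SELECT state, count(*) FROM Food_Trucks GROUP BY state;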
facets:suggested-facets facets suggested-facets Suggested facets Datasette's table UI will suggest facets for the user to apply, based on the following criteria: For the currently filtered data, are there any columns which, if applied as a facet... Will return 30 or fewer unique options Will return more than one unique option Will return fewer unique options than the total number of filtered rows And the query used to evaluate these criteria can be completed in under 50ms That last point is particularly important: Datasette runs a query for every column that is displayed on a page, which could get expensive - so to avoid slow load times it sets a time limit of just 50ms for each of those queries. This means suggested facets are unlikely to appear for tables with millions of records in them. ["Facets"] []
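As an illustrative sketch - not Datasette's exact internal query - the uniqueness checks amount to counting distinct values in the currently filtered rows, subject to that 50ms limit (the column and table names here are hypothetical):

SELECT count(DISTINCT state) AS n FROM Food_Trucks;
-- "state" is suggested as a facet if n is between 2 and 30
-- and n is less than the number of filtered rows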
full_text_search:configuring-fts-by-hand full_text_search configuring-fts-by-hand Configuring FTS by hand We recommend using sqlite-utils , but if you want to hand-roll a SQLite full-text search table you can do so using the following SQL. To enable full-text search for a table called items that works against the name and description columns, you would run this SQL to create a new items_fts FTS virtual table: CREATE VIRTUAL TABLE "items_fts" USING FTS4 ( name, description, content="items" ); This creates a set of tables to power full-text search against items . The new items_fts table will be detected by Datasette as the fts_table for the items table. Creating the table is not enough: you also need to populate it with a copy of the data that you wish to make searchable. You can do that using the following SQL: INSERT INTO "items_fts" (rowid, name, description) SELECT rowid, name, description FROM items; If your table has columns that are foreign key references to other tables you can include that data in your full-text search index using a join. Imagine the items table has a foreign key column called category_id which refers to a categories table - you could create a full-text search table like this: CREATE VIRTUAL TABLE "items_fts" USING FTS4 ( name, description, category_name, content="items" ); And then populate it like this: INSERT INTO "items_fts" (rowid, name, description, category_name) SELECT items.rowid, items.name, items.description, categories.name FROM items JOIN categories ON items.category_id=categories.id; You can use this technique to populate the full-text search index from any combination of tables and joins that makes sense for your project. ["Full-text search", "Enabling full-text search for a SQLite table"] [{"href": "https://sqlite-utils.datasette.io/", "label": "sqlite-utils"}]
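Once the index is populated you can query it by matching against the FTS table and joining back on rowid - a minimal sketch, where 'search terms' is a placeholder query:

SELECT items.* FROM items
WHERE items.rowid IN (
    SELECT rowid FROM items_fts WHERE items_fts MATCH 'search terms'
);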

CREATE TABLE [sections] (
   [id] TEXT PRIMARY KEY,
   [page] TEXT,
   [ref] TEXT,
   [title] TEXT,
   [content] TEXT,
   [breadcrumbs] TEXT,
   [references] TEXT
);