{"rowid": 1, "title": "CSV export", "content": "Any Datasette table, view or custom SQL query can be exported as CSV. \n To obtain the CSV representation of the table you are looking, click the \"this\n data as CSV\" link. \n You can also use the advanced export form for more control over the resulting\n file, which looks like this and has the following options: \n \n \n \n download file - instead of displaying CSV in your browser, this forces\n your browser to download the CSV to your downloads directory. \n \n \n expand labels - if your table has any foreign key references this option\n will cause the CSV to gain additional COLUMN_NAME_label columns with a\n label for each foreign key derived from the linked table. In this example \n the city_id column is accompanied by a city_id_label column. \n \n \n stream all rows - by default CSV files only contain the first\n max_returned_rows records. This option will cause Datasette to\n loop through every matching record and return them as a single CSV file. \n \n \n You can try that out on https://latest.datasette.io/fixtures/facetable?_size=4", "sections_fts": 61, "rank": null} {"rowid": 2, "title": "URL parameters", "content": "The following options can be used to customize the CSVs returned by Datasette. \n \n \n ?_header=off \n \n This removes the first row of the CSV file specifying the headings - only the row data will be returned. \n \n \n \n ?_stream=on \n \n Stream all matching records, not just the first page of results. See below. \n \n \n \n ?_dl=on \n \n Causes Datasette to return a content-disposition: attachment; filename=\"filename.csv\" header.", "sections_fts": 61, "rank": null} {"rowid": 3, "title": "Streaming all records", "content": "The stream all rows option is designed to be as efficient as possible -\n under the hood it takes advantage of Python 3 asyncio capabilities and\n Datasette's efficient pagination to stream back the full\n CSV file. \n Since databases can get pretty large, by default this option is capped at 100MB -\n if a table returns more than 100MB of data the last line of the CSV will be a\n truncation error message. \n You can increase or remove this limit using the max_csv_mb config\n setting. You can also disable the CSV export feature entirely using\n allow_csv_stream .", "sections_fts": 61, "rank": null} {"rowid": 4, "title": "Full-text search", "content": "SQLite includes a powerful mechanism for enabling full-text search against SQLite records. Datasette can detect if a table has had full-text search configured for it in the underlying database and display a search interface for filtering that table. \n Here's an example search : \n \n Datasette automatically detects which tables have been configured for full-text search.", "sections_fts": 61, "rank": null} {"rowid": 5, "title": "The table page and table view API", "content": "Table views that support full-text search can be queried using the ?_search=TERMS query string parameter. This will run the search against content from all of the columns that have been included in the index. \n Try this example: fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort \n SQLite full-text search supports wildcards. This means you can easily implement prefix auto-complete by including an asterisk at the end of the search term - for example: \n /dbname/tablename/?_search=rob* \n This will return all records containing at least one word that starts with the letters rob . 
\n You can also run searches against just the content of a specific named column by using _search_COLNAME=TERMS - for example, this would search for just rows where the name column in the FTS index mentions Sarah : \n /dbname/tablename/?_search_name=Sarah", "sections_fts": 61, "rank": null} {"rowid": 6, "title": "Advanced SQLite search queries", "content": "SQLite full-text search includes support for a variety of advanced queries , including AND , OR , NOT and NEAR . \n By default Datasette disables these features to ensure they do not cause errors or confusion for users who are not aware of them. You can disable this escaping and use the advanced queries by adding &_searchmode=raw to the table page query string. \n If you want to enable these operators by default for a specific table, you can do so by adding \"searchmode\": \"raw\" to the metadata configuration for that table, see Configuring full-text search for a table or view . \n If that option has been specified in the table metadata but you want to over-ride it and return to the default behavior you can append &_searchmode=escaped to the query string.", "sections_fts": 61, "rank": null} {"rowid": 7, "title": "Configuring full-text search for a table or view", "content": "If a table has a corresponding FTS table set up using the content= argument to CREATE VIRTUAL TABLE shown below, Datasette will detect it automatically and add a search interface to the table page for that table. \n You can also manually configure which table should be used for full-text search using query string parameters or Metadata . You can set the associated FTS table for a specific table and you can also set one for a view - if you do that, the page for that SQL view will offer a search option. \n Use ?_fts_table=x to over-ride the FTS table for a specific page. If the primary key was something other than rowid you can use ?_fts_pk=col to set that as well. This is particularly useful for views, for example: \n https://latest.datasette.io/fixtures/searchable_view?_fts_table=searchable_fts&_fts_pk=pk \n The fts_table metadata property can be used to specify an associated FTS table. If the primary key column in your table which was used to populate the FTS table is something other than rowid , you can specify the column to use with the fts_pk property. \n The \"searchmode\": \"raw\" property can be used to default the table to accepting SQLite advanced search operators, as described in Advanced SQLite search queries . \n Here is an example which enables full-text search (with SQLite advanced search operators) for a display_ads view which is defined against the ads table and hence needs to run FTS against the ads_fts table, using the id as the primary key: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"russian-ads\": {\n \"tables\": {\n \"display_ads\": {\n \"fts_table\": \"ads_fts\",\n \"fts_pk\": \"id\",\n \"searchmode\": \"raw\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "sections_fts": 61, "rank": null} {"rowid": 8, "title": "Searches using custom SQL", "content": "You can include full-text search results in custom SQL queries. The general pattern with SQLite search is to run the search as a sub-select that returns rowid values, then include those rowids in another part of the query. \n You can see the syntax for a basic search by running that search on a table page and then clicking \"View and edit SQL\" to see the underlying SQL. 
For example, consider this search for manafort is the US FARA database : \n /fara/FARA_All_ShortForms?_search=manafort \n If you click View and edit SQL you'll see that the underlying SQL looks like this: \n select\n rowid,\n Short_Form_Termination_Date,\n Short_Form_Date,\n Short_Form_Last_Name,\n Short_Form_First_Name,\n Registration_Number,\n Registration_Date,\n Registrant_Name,\n Address_1,\n Address_2,\n City,\n State,\n Zip\nfrom\n FARA_All_ShortForms\nwhere\n rowid in (\n select\n rowid\n from\n FARA_All_ShortForms_fts\n where\n FARA_All_ShortForms_fts match escape_fts(:search)\n )\norder by\n rowid\nlimit\n 101", "sections_fts": 61, "rank": null} {"rowid": 9, "title": "Enabling full-text search for a SQLite table", "content": "Datasette takes advantage of the external content mechanism in SQLite, which allows a full-text search virtual table to be associated with the contents of another SQLite table. \n To set up full-text search for a table, you need to do two things: \n \n \n Create a new FTS virtual table associated with your table \n \n \n Populate that FTS table with the data that you would like to be able to run searches against", "sections_fts": 61, "rank": null} {"rowid": 10, "title": "Configuring FTS using sqlite-utils", "content": "sqlite-utils is a CLI utility and Python library for manipulating SQLite databases. You can use it from Python code to configure FTS search, or you can achieve the same goal using the accompanying command-line tool . \n Here's how to use sqlite-utils to enable full-text search for an items table across the name and description columns: \n sqlite-utils enable-fts mydatabase.db items name description", "sections_fts": 61, "rank": null} {"rowid": 11, "title": "Configuring FTS using csvs-to-sqlite", "content": "If your data starts out in CSV files, you can use Datasette's companion tool csvs-to-sqlite to convert that file into a SQLite database and enable full-text search on specific columns. For a file called items.csv where you want full-text search to operate against the name and description columns you would run the following: \n csvs-to-sqlite items.csv items.db -f name -f description", "sections_fts": 61, "rank": null} {"rowid": 12, "title": "Configuring FTS by hand", "content": "We recommend using sqlite-utils , but if you want to hand-roll a SQLite full-text search table you can do so using the following SQL. \n To enable full-text search for a table called items that works against the name and description columns, you would run this SQL to create a new items_fts FTS virtual table: \n CREATE VIRTUAL TABLE \"items_fts\" USING FTS4 (\n name,\n description,\n content=\"items\"\n); \n This creates a set of tables to power full-text search against items . The new items_fts table will be detected by Datasette as the fts_table for the items table. \n Creating the table is not enough: you also need to populate it with a copy of the data that you wish to make searchable. You can do that using the following SQL: \n INSERT INTO \"items_fts\" (rowid, name, description)\n SELECT rowid, name, description FROM items; \n If your table has columns that are foreign key references to other tables you can include that data in your full-text search index using a join. 
Imagine the items table has a foreign key column called category_id which refers to a categories table - you could create a full-text search table like this: \n CREATE VIRTUAL TABLE \"items_fts\" USING FTS4 (\n name,\n description,\n category_name,\n content=\"items\"\n); \n And then populate it like this: \n INSERT INTO \"items_fts\" (rowid, name, description, category_name)\n SELECT items.rowid,\n items.name,\n items.description,\n categories.name\n FROM items JOIN categories ON items.category_id=categories.id; \n You can use this technique to populate the full-text search index from any combination of tables and joins that makes sense for your project.", "sections_fts": 61, "rank": null} {"rowid": 13, "title": "FTS versions", "content": "There are three different versions of the SQLite FTS module: FTS3, FTS4 and FTS5. You can tell which versions are supported by your instance of Datasette by checking the /-/versions page. \n FTS5 is the most advanced module but may not be available in the SQLite version that is bundled with your Python installation. Most importantly, FTS5 is the only version that has the ability to order by search relevance without needing extra code. \n If you can't be sure that FTS5 will be available, you should use FTS4.", "sections_fts": 61, "rank": null} {"rowid": 14, "title": "Performance and caching", "content": "Datasette runs on top of SQLite, and SQLite has excellent performance. For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes. \n That said, there are a number of tricks you can use to improve Datasette's performance.", "sections_fts": 61, "rank": null} {"rowid": 15, "title": "Immutable mode", "content": "If you can be certain that a SQLite database file will not be changed by another process you can tell Datasette to open that file in immutable mode . \n Doing so will disable all locking and change detection, which can result in improved query performance. \n This also enables further optimizations relating to HTTP caching, described below. \n To open a file in immutable mode pass it to the datasette command using the -i option: \n datasette -i data.db \n When you open a file in immutable mode like this Datasette will also calculate and cache the row counts for each table in that database when it first starts up, further improving performance.", "sections_fts": 61, "rank": null} {"rowid": 16, "title": "Using \"datasette inspect\"", "content": "Counting the rows in a table can be a very expensive operation on larger databases. In immutable mode Datasette performs this count only once and caches the results, but this can still cause server startup time to increase by several seconds or more. \n If you know that a database is never going to change you can precalculate the table row counts once and store then in a JSON file, then use that file when you later start the server. \n To create a JSON file containing the calculated row counts for a database, use the following: \n datasette inspect data.db --inspect-file=counts.json \n Then later you can start Datasette against the counts.json file and use it to skip the row counting step and speed up server startup: \n datasette -i data.db --inspect-file=counts.json \n You need to use the -i immutable mode against the database file here or the counts from the JSON file will be ignored. 
\n You will rarely need to use this optimization in every-day use, but several of the datasette publish commands described in Publishing data use this optimization for better performance when deploying a database file to a hosting provider.", "sections_fts": 61, "rank": null} {"rowid": 17, "title": "HTTP caching", "content": "If your database is immutable and guaranteed not to change, you can gain major performance improvements from Datasette by enabling HTTP caching. \n This can work at two different levels. First, it can tell browsers to cache the results of queries and serve future requests from the browser cache. \n More significantly, it allows you to run Datasette behind a caching proxy such as Varnish or use a cache provided by a hosted service such as Fastly or Cloudflare . This can provide incredible speed-ups since a query only needs to be executed by Datasette the first time it is accessed - all subsequent hits can then be served by the cache. \n Using a caching proxy in this way could enable a Datasette-backed visualization to serve thousands of hits a second while running Datasette itself on extremely inexpensive hosting. \n Datasette's integration with HTTP caches can be enabled using a combination of configuration options and query string arguments. \n The default_cache_ttl setting sets the default HTTP cache TTL for all Datasette pages. This is 5 seconds unless you change it - you can set it to 0 if you wish to disable HTTP caching entirely. \n You can also change the cache timeout on a per-request basis using the ?_ttl=10 query string parameter. This can be useful when you are working with the Datasette JSON API - you may decide that a specific query can be cached for a longer time, or maybe you need to set ?_ttl=0 for some requests for example if you are running a SQL order by random() query.", "sections_fts": 61, "rank": null} {"rowid": 18, "title": "datasette-hashed-urls", "content": "If you open a database file in immutable mode using the -i option, you can be assured that the content of that database will not change for the lifetime of the Datasette server. \n The datasette-hashed-urls plugin implements an optimization where your database is served with part of the SHA-256 hash of the database contents baked into the URL. \n A database at /fixtures will instead be served at /fixtures-aa7318b , and a year-long cache expiry header will be returned with those pages. \n This will then be cached by both browsers and caching proxies such as Cloudflare or Fastly, providing a potentially significant performance boost. \n To install the plugin, run the following: \n datasette install datasette-hashed-urls \n \n Prior to Datasette 0.61 hashed URL mode was a core Datasette feature, enabled using the hash_urls setting. This implementation has now been removed in favor of the datasette-hashed-urls plugin. \n Prior to Datasette 0.28 hashed URL mode was the default behaviour for Datasette, since all database files were assumed to be immutable and unchanging. From 0.28 onwards the default has been to treat database files as mutable unless explicitly configured otherwise.", "sections_fts": 61, "rank": null} {"rowid": 19, "title": "Configuration", "content": "Datasette offers several ways to configure your Datasette instances: server settings, plugin configuration, authentication, and more. 
\n Most configuration can be handled using a datasette.yaml configuration file, passed to datasette using the -c/--config flag: \n datasette mydatabase.db --config datasette.yaml \n This file can also use JSON, as datasette.json . YAML is recommended over JSON due to its support for comments and multi-line strings.", "sections_fts": 61, "rank": null} {"rowid": 20, "title": "Configuration via the command-line", "content": "The recommended way to configure Datasette is using a datasette.yaml file passed to -c/--config . You can also pass individual settings to Datasette using the -s/--setting option, which can be used multiple times: \n datasette mydatabase.db \\\n --setting settings.default_page_size 50 \\\n --setting settings.sql_time_limit_ms 3500 \n This option takes dotted-notation for the first argument and a value for the second argument. This means you can use it to set any configuration value that would be valid in a datasette.yaml file. \n It also works for plugin configuration, for example for datasette-cluster-map : \n datasette mydatabase.db \\\n --setting plugins.datasette-cluster-map.latitude_column xlat \\\n --setting plugins.datasette-cluster-map.longitude_column xlon \n If the value you provide is a valid JSON object or list it will be treated as nested data, allowing you to configure plugins that accept lists such as datasette-proxy-url : \n datasette mydatabase.db \\\n -s plugins.datasette-proxy-url.paths '[{\"path\": \"/proxy\", \"backend\": \"http://example.com/\"}]' \n This is equivalent to a datasette.yaml file containing the following: \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n plugins:\n datasette-proxy-url:\n paths:\n - path: /proxy\n backend: http://example.com/\n \"\"\").strip()\n ) \n ]]] \n [[[end]]]", "sections_fts": 61, "rank": null} {"rowid": 21, "title": null, "content": "The following example shows some of the valid configuration options that can exist inside datasette.yaml . \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n # Datasette settings block\n settings:\n default_page_size: 50\n sql_time_limit_ms: 3500\n max_returned_rows: 2000\n\n # top-level plugin configuration\n plugins:\n datasette-my-plugin:\n key: valueA\n\n # Database and table-level configuration\n databases:\n your_db_name:\n # plugin configuration for the your_db_name database\n plugins:\n datasette-my-plugin:\n key: valueA\n tables:\n your_table_name:\n allow:\n # Only the root user can access this table\n id: root\n # plugin configuration for the your_table_name table\n # inside your_db_name database\n plugins:\n datasette-my-plugin:\n key: valueB\n \"\"\")\n ) \n ]]] \n [[[end]]]", "sections_fts": 61, "rank": null} {"rowid": 22, "title": "Settings", "content": "Settings can be configured in datasette.yaml with the settings key: \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n # inside datasette.yaml\n settings:\n default_allow_sql: off\n default_page_size: 50\n \"\"\").strip()\n ) \n ]]] \n [[[end]]] \n The full list of settings is available in the settings documentation . Settings can also be passed to Datasette using one or more --setting name value command line options.`", "sections_fts": 61, "rank": null} {"rowid": 23, "title": "Plugin configuration", "content": "Datasette plugins often require configuration. 
This plugin configuration should be placed in plugins keys inside datasette.yaml . \n Most plugins are configured at the top-level of the file, using the plugins key: \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n # inside datasette.yaml\n plugins:\n datasette-my-plugin:\n key: my_value\n \"\"\").strip()\n ) \n ]]] \n [[[end]]] \n Some plugins can be configured at the database or table level. These should use a plugins key nested under the appropriate place within the databases object: \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n # inside datasette.yaml\n databases:\n my_database:\n # plugin configuration for the my_database database\n plugins:\n datasette-my-plugin:\n key: my_value\n my_other_database:\n tables:\n my_table:\n # plugin configuration for the my_table table inside the my_other_database database\n plugins:\n datasette-my-plugin:\n key: my_value\n \"\"\").strip()\n ) \n ]]] \n [[[end]]]", "sections_fts": 61, "rank": null} {"rowid": 24, "title": "Permissions configuration", "content": "Datasette's authentication and permissions system can also be configured using datasette.yaml . \n Here is a simple example: \n [[[cog\nfrom metadata_doc import config_example\nimport textwrap\nconfig_example(cog, textwrap.dedent(\n \"\"\"\n # Instance is only available to users 'sharon' and 'percy':\n allow:\n id:\n - sharon\n - percy\n\n # Only 'percy' is allowed access to the accounting database:\n databases:\n accounting:\n allow:\n id: percy\n \"\"\").strip()\n ) \n ]]] \n [[[end]]] \n Access permissions in datasette.yaml has the full details.", "sections_fts": 61, "rank": null} {"rowid": 25, "title": "Canned queries configuration", "content": "Canned queries are named SQL queries that appear in the Datasette interface. They can be configured in datasette.yaml using the queries key at the database level: \n [[[cog\nfrom metadata_doc import config_example, config_example\nconfig_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"queries\": {\n \"just_species\": {\n \"sql\": \"select qSpecies from Street_Tree_List\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n See the canned queries documentation for more, including how to configure writable canned queries .", "sections_fts": 61, "rank": null} {"rowid": 26, "title": "Custom CSS and JavaScript", "content": "Datasette can load additional CSS and JavaScript files, configured in datasette.yaml like this: \n [[[cog\nfrom metadata_doc import config_example\nconfig_example(cog, \"\"\"\n extra_css_urls:\n - https://simonwillison.net/static/css/all.bf8cd891642c.css\n extra_js_urls:\n - https://code.jquery.com/jquery-3.2.1.slim.min.js\n\"\"\") \n ]]] \n [[[end]]] \n The extra CSS and JavaScript files will be linked in the
of every page: \n \n \n You can also specify a SRI (subresource integrity hash) for these assets: \n [[[cog\nconfig_example(cog, \"\"\"\n extra_css_urls:\n - url: https://simonwillison.net/static/css/all.bf8cd891642c.css\n sri: sha384-9qIZekWUyjCyDIf2YK1FRoKiPJq4PHt6tp/ulnuuyRBvazd0hG7pWbE99zvwSznI\n extra_js_urls:\n - url: https://code.jquery.com/jquery-3.2.1.slim.min.js\n sri: sha256-k2WSCIexGzOj3Euiig+TlR8gA0EmPjuc79OEeY5L45g=\n\"\"\") \n ]]] \n [[[end]]] \n This will produce: \n \n \n Modern browsers will only execute the stylesheet or JavaScript if the SRI hash\n matches the content served. You can generate hashes using www.srihash.org \n Items in \"extra_js_urls\" can specify \"module\": true if they reference JavaScript that uses JavaScript modules . This configuration: \n [[[cog\nconfig_example(cog, \"\"\"\n extra_js_urls:\n - url: https://example.datasette.io/module.js\n module: true\n\"\"\") \n ]]] \n [[[end]]] \n Will produce this HTML: \n ", "sections_fts": 61, "rank": null} {"rowid": 27, "title": "Installation", "content": "If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch \n \n There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. \n If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . \n \n \n \n Basic installation \n \n \n Datasette Desktop for Mac \n \n \n Using Homebrew \n \n \n Using pip \n \n \n \n \n Advanced installation options \n \n \n Using pipx \n \n \n Installing plugins using pipx \n \n \n Upgrading packages using pipx \n \n \n \n \n Using Docker \n \n \n Loading SpatiaLite \n \n \n Installing plugins \n \n \n \n \n \n \n A note about extensions", "sections_fts": 61, "rank": null} {"rowid": 28, "title": "Basic installation", "content": "", "sections_fts": 61, "rank": null} {"rowid": 29, "title": "Datasette Desktop for Mac", "content": "Datasette Desktop is a packaged Mac application which bundles Datasette together with Python and allows you to install and run Datasette directly on your laptop. This is the best option for local installation if you are not comfortable using the command line.", "sections_fts": 61, "rank": null} {"rowid": 30, "title": "Using Homebrew", "content": "If you have a Mac and use Homebrew , you can install Datasette by running this command in your terminal: \n brew install datasette \n This should install the latest version. You can confirm by running: \n datasette --version \n You can upgrade to the latest Homebrew packaged version using: \n brew upgrade datasette \n Once you have installed Datasette you can install plugins using the following: \n datasette install datasette-vega \n If the latest packaged release of Datasette has not yet been made available through Homebrew, you can upgrade your Homebrew installation in-place using: \n datasette install -U datasette", "sections_fts": 61, "rank": null} {"rowid": 31, "title": "Using pip", "content": "Datasette requires Python 3.8 or higher. The Python.org Python For Beginners page has instructions for getting started. 
\n You can install Datasette and its dependencies using pip : \n pip install datasette \n You can now run Datasette like so: \n datasette", "sections_fts": 61, "rank": null} {"rowid": 32, "title": "Advanced installation options", "content": "", "sections_fts": 61, "rank": null} {"rowid": 33, "title": "Using pipx", "content": "pipx is a tool for installing Python software with all of its dependencies in an isolated environment, to ensure that they will not conflict with any other installed Python software. \n If you use Homebrew on macOS you can install pipx like this: \n brew install pipx\npipx ensurepath \n Without Homebrew you can install it like so: \n python3 -m pip install --user pipx\npython3 -m pipx ensurepath \n The pipx ensurepath command configures your shell to ensure it can find commands that have been installed by pipx - generally by making sure ~/.local/bin has been added to your PATH . \n Once pipx is installed you can use it to install Datasette like this: \n pipx install datasette \n Then run datasette --version to confirm that it has been successfully installed.", "sections_fts": 61, "rank": null} {"rowid": 34, "title": "Installing plugins using pipx", "content": "You can install additional datasette plugins with pipx inject like so: \n pipx inject datasette datasette-json-html \n injected package datasette-json-html into venv datasette\ndone! \u2728 \ud83c\udf1f \u2728 \n Then to confirm the plugin was installed correctly: \n datasette plugins \n [\n {\n \"name\": \"datasette-json-html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]", "sections_fts": 61, "rank": null} {"rowid": 35, "title": "Upgrading packages using pipx", "content": "You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : \n pipx upgrade datasette \n upgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) \n To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n] \n Now upgrade the plugin: \n pipx runpip datasette install -U datasette-vega-0 \n Collecting datasette-vega\nDownloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8 MB 2.0 MB/s\n...\nInstalling collected packages: datasette-vega\nAttempting uninstall: datasette-vega\n Found existing installation: datasette-vega 0.6\n Uninstalling datasette-vega-0.6:\n Successfully uninstalled datasette-vega-0.6\nSuccessfully installed datasette-vega-0.6.2 \n To confirm the upgrade: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\"\n }\n]", "sections_fts": 61, "rank": null} {"rowid": 36, "title": "Using Docker", "content": "A Docker image containing the latest release of Datasette is published to Docker\n Hub here: https://hub.docker.com/r/datasetteproject/datasette/ \n If you have Docker installed (for example with Docker for Mac on OS X) you can download and run this\n image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n This will start an instance of Datasette running on your 
", "sections_fts": 61, "rank": null} {"rowid": 32, "title": "Advanced installation options", "content": "", "sections_fts": 61, "rank": null} {"rowid": 33, "title": "Using pipx", "content": "pipx is a tool for installing Python software with all of its dependencies in an isolated environment, to ensure that they will not conflict with any other installed Python software. \n If you use Homebrew on macOS you can install pipx like this: \n brew install pipx\npipx ensurepath \n Without Homebrew you can install it like so: \n python3 -m pip install --user pipx\npython3 -m pipx ensurepath \n The pipx ensurepath command configures your shell to ensure it can find commands that have been installed by pipx - generally by making sure ~/.local/bin has been added to your PATH . \n Once pipx is installed you can use it to install Datasette like this: \n pipx install datasette \n Then run datasette --version to confirm that it has been successfully installed.", "sections_fts": 61, "rank": null} {"rowid": 34, "title": "Installing plugins using pipx", "content": "You can install additional datasette plugins with pipx inject like so: \n pipx inject datasette datasette-json-html \n injected package datasette-json-html into venv datasette\ndone! \u2728 \ud83c\udf1f \u2728 \n Then to confirm the plugin was installed correctly: \n datasette plugins \n [\n {\n \"name\": \"datasette-json-html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]", "sections_fts": 61, "rank": null} {"rowid": 35, "title": "Upgrading packages using pipx", "content": "You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : \n pipx upgrade datasette \n upgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) \n To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n] \n Now upgrade the plugin: \n pipx runpip datasette install -U datasette-vega \n Collecting datasette-vega\nDownloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8 MB 2.0 MB/s\n...\nInstalling collected packages: datasette-vega\nAttempting uninstall: datasette-vega\n Found existing installation: datasette-vega 0.6\n Uninstalling datasette-vega-0.6:\n Successfully uninstalled datasette-vega-0.6\nSuccessfully installed datasette-vega-0.6.2 \n To confirm the upgrade: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\"\n }\n]", "sections_fts": 61, "rank": null} {"rowid": 36, "title": "Using Docker", "content": "A Docker image containing the latest release of Datasette is published to Docker\n Hub here: https://hub.docker.com/r/datasetteproject/datasette/ \n If you have Docker installed (for example with Docker for Mac on OS X) you can download and run this\n image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n This will start an instance of Datasette running on your\n machine's port 8001,\n serving the fixtures.db file in your current directory. \n Now visit http://127.0.0.1:8001/ to access Datasette. \n (You can download a copy of fixtures.db from\n https://latest.datasette.io/fixtures.db ) \n To upgrade to the most recent release of Datasette, run the following: \n docker pull datasetteproject/datasette", "sections_fts": 61, "rank": null} {"rowid": 37, "title": "Loading SpatiaLite", "content": "The datasetteproject/datasette image includes a recent version of the\n SpatiaLite extension for SQLite. To load and enable that\n module, use the following command: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \\\n --load-extension=spatialite \n You can confirm that SpatiaLite is successfully loaded by visiting\n http://127.0.0.1:8001/-/versions", "sections_fts": 61, "rank": null} {"rowid": 38, "title": "Installing plugins", "content": "If you want to install plugins into your local Datasette Docker image you can do\n so using the following recipe. This will install the plugins and then save a\n brand new local image called datasette-with-plugins : \n docker run datasetteproject/datasette \\\n pip install datasette-vega\n\ndocker commit $(docker ps -lq) datasette-with-plugins \n You can now run the new custom image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasette-with-plugins \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n You can confirm that the plugins are installed by visiting\n http://127.0.0.1:8001/-/plugins \n Some plugins such as datasette-ripgrep may need additional system packages. You can install these by running apt-get install inside the container: \n docker run datasette-057a0 bash -c '\n apt-get update &&\n apt-get install ripgrep &&\n pip install datasette-ripgrep'\n\ndocker commit $(docker ps -lq) datasette-with-ripgrep", "sections_fts": 61, "rank": null} {"rowid": 39, "title": "A note about extensions", "content": "SQLite supports extensions, such as SpatiaLite for geospatial operations. \n These can be loaded using the --load-extension argument, like so: \n datasette --load-extension=/usr/local/lib/mod_spatialite.dylib \n Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension: \n \n Your Python installation does not have the ability to load SQLite extensions. \n \n In some cases you may see the following error message instead: \n AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension' \n On macOS the easiest fix for this is to install Datasette using Homebrew: \n brew install datasette \n Use which datasette to confirm that datasette will run that version. \n The output should look something like this: \n /usr/local/opt/datasette/bin/datasette \n If you get a different location here such as /Library/Frameworks/Python.framework/Versions/3.10/bin/datasette you can run the following command to cause datasette to execute the Homebrew version instead: \n alias datasette=$(echo $(brew --prefix datasette)/bin/datasette) \n You can undo this operation using: \n unalias datasette \n If you need to run SQLite with extension support for other Python code, you can do so by installing Python itself using Homebrew: \n brew install python \n Then executing Python using: \n /usr/local/opt/python@3/libexec/bin/python \n A more convenient way to work with this version of Python may be to use it to create a virtual environment: \n /usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv \n Then activate it like this: \n source datasette-venv/bin/activate \n Now running python and pip will work against a version of Python 3 that includes support for SQLite extensions: \n pip install datasette\nwhich datasette\ndatasette --version
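 \n As a final end-to-end check, you can ask that environment's datasette to load an extension and dump its version information - a sketch, assuming SpatiaLite is installed in one of the standard locations: \n datasette data.db --load-extension=spatialite --get /-/versions.json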
", "sections_fts": 61, "rank": null} {"rowid": 40, "title": "Writing plugins", "content": "You can write one-off plugins that apply to just one Datasette instance, or you can write plugins which can be installed using pip and can be shipped to the Python Package Index ( PyPI ) for other people to install. \n Want to start by looking at an example? The Datasette plugins directory lists more than 90 open source plugins with code you can explore. The plugin hooks page includes links to example plugins for each of the documented hooks.", "sections_fts": 61, "rank": null} {"rowid": 41, "title": "Tracing plugin hooks", "content": "The DATASETTE_TRACE_PLUGINS environment variable turns on detailed tracing showing exactly which hooks are being run. This can be useful for understanding how Datasette is using your plugin. \n DATASETTE_TRACE_PLUGINS=1 datasette mydb.db \n Example output: \n actor_from_request:\n{ 'datasette':