{"id": "internals:datasette-render-template", "page": "internals", "ref": "datasette-render-template", "title": "await .render_template(template, context=None, request=None)", "content": "template - string, list of strings or jinja2.Template \n \n The template file to be rendered, e.g. my_plugin.html . Datasette will search for this file first in the --template-dir= location, if it was specified - then in the plugin's bundled templates and finally in Datasette's set of default templates. \n If this is a list of template file names then the first one that exists will be loaded and rendered. \n If this is a Jinja Template object it will be used directly. \n \n \n \n context - None or a Python dictionary \n \n The context variables to pass to the template. \n \n \n \n request - request object or None \n \n If you pass a Datasette request object here it will be made available to the template. \n \n \n \n Renders a Jinja template using Datasette's preconfigured instance of Jinja and returns the resulting string. The template will have access to Datasette's default template functions and any functions that have been made available by other plugins.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[{\"href\": \"https://jinja.palletsprojects.com/en/2.11.x/api/#jinja2.Template\", \"label\": \"Template object\"}, {\"href\": \"https://jinja.palletsprojects.com/en/2.11.x/\", \"label\": \"Jinja template\"}]"} {"id": "internals:datasette-resolve-database", "page": "internals", "ref": "datasette-resolve-database", "title": ".resolve_database(request)", "content": "request - Request object \n \n A request object \n \n \n \n If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a database named group, you can use this method to resolve the database object. \n This returns a Database instance. \n If the database cannot be found, it raises a datasette.utils.asgi.DatabaseNotFound exception - which is a subclass of datasette.utils.asgi.NotFound with a .database_name attribute set to the name of the database that was requested.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-resolve-row", "page": "internals", "ref": "datasette-resolve-row", "title": ".resolve_row(request)", "content": "request - Request object \n \n A request object \n \n \n \n This method assumes your route declares named groups for database , table and pks . \n It returns a ResolvedRow named tuple instance with the following fields: \n \n \n db - Database \n \n The database object \n \n \n \n table - string \n \n The name of the table \n \n \n \n sql - string \n \n SQL snippet that can be used in a WHERE clause to select the row \n \n \n \n params - dict \n \n Parameters that should be passed to the SQL query \n \n \n \n pks - list \n \n List of primary key column names \n \n \n \n pk_values - list \n \n List of primary key values decoded from the URL \n \n \n \n row - sqlite3.Row \n \n The row itself \n \n \n \n If the database or table cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. \n If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception. \n If the row cannot be found it raises a datasette.utils.asgi.RowNotFound exception. 
This has .database_name , .table and .pk_values attributes, extracted from the request path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-resolve-table", "page": "internals", "ref": "datasette-resolve-table", "title": ".resolve_table(request)", "content": "request - Request object \n \n A request object \n \n \n \n This assumes that the regular expression for your route declares both a database and a table named group. \n It returns a ResolvedTable named tuple instance with the following fields: \n \n \n db - Database \n \n The database object \n \n \n \n table - string \n \n The name of the table (or view) \n \n \n \n is_view - boolean \n \n True if this is a view, False if it is a table \n \n \n \n If the database cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. \n If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception - a subclass of datasette.utils.asgi.NotFound with .database_name and .table attributes.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-setting", "page": "internals", "ref": "datasette-setting", "title": ".setting(key)", "content": "key - string \n \n The name of the setting, e.g. base_url . \n \n \n \n Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting. \n For example: \n downloads_are_allowed = datasette.setting(\"allow_download\")", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-sign", "page": "internals", "ref": "datasette-sign", "title": ".sign(value, namespace=\"default\")", "content": "value - any serializable type \n \n The value to be signed. \n \n \n \n namespace - string, optional \n \n An alternative namespace, see the itsdangerous salt documentation . \n \n \n \n Utility method for signing values, such that you can safely pass data to and from an untrusted environment. This is a wrapper around the itsdangerous library. \n This method returns a signed string, which can be decoded and verified using .unsign(value, namespace=\"default\") .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[{\"href\": \"https://itsdangerous.palletsprojects.com/en/1.1.x/serializer/#the-salt\", \"label\": \"itsdangerous salt documentation\"}, {\"href\": \"https://itsdangerous.palletsprojects.com/\", \"label\": \"itsdangerous\"}]"} {"id": "internals:datasette-track-event", "page": "internals", "ref": "datasette-track-event", "title": "await .track_event(event)", "content": "event - Event \n \n An instance of a subclass of datasette.events.Event . \n \n \n \n Plugins can call this to track events, using classes they have previously registered. See Event tracking for details. \n The event will then be passed to all plugins that have registered to receive events using the track_event(datasette, event) hook. 
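\n A minimal sketch of what registering such an event class might look like - this assumes the dataclass-based Event subclass pattern and the register_events() plugin hook, using the hypothetical BanUserEvent class from the example below: \n from dataclasses import dataclass\nfrom datasette import hookimpl\nfrom datasette.events import Event\n\n# Hypothetical custom event: \"name\" is the event name, the other fields carry event data\n@dataclass\nclass BanUserEvent(Event):\n    name = \"ban-user\"\n    user: dict\n\n@hookimpl\ndef register_events():\n    return [BanUserEvent] 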
\n Example usage, assuming the plugin has previously registered the BanUserEvent class: \n await datasette.track_event(\n BanUserEvent(user={\"id\": 1, \"username\": \"cleverbot\"})\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-unsign", "page": "internals", "ref": "datasette-unsign", "title": ".unsign(value, namespace=\"default\")", "content": "signed - string \n \n The signed string that was created using .sign(value, namespace=\"default\") . \n \n \n \n namespace - string, optional \n \n The alternative namespace, if one was used. \n \n \n \n Returns the original, decoded object that was passed to .sign(value, namespace=\"default\") . If the signature is not valid this raises an itsdangerous.BadSignature exception.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "deploying:deploying", "page": "deploying", "ref": "deploying", "title": "Deploying Datasette", "content": "The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. \n You can deploy Datasette to other hosting providers using the instructions on this page.", "breadcrumbs": "[]", "references": "[]"} {"id": "deploying:deploying-buildpacks", "page": "deploying", "ref": "deploying-buildpacks", "title": "Deploying using buildpacks", "content": "Some hosting providers such as Heroku , DigitalOcean App Platform and Scalingo support the Buildpacks standard for deploying Python web applications. \n Deploying Datasette on these platforms requires two files: requirements.txt and Procfile . \n The requirements.txt file lets the platform know which Python packages should be installed. It should contain datasette at a minimum, but can also list any Datasette plugins you wish to install - for example: \n datasette\ndatasette-vega \n The Procfile lets the hosting platform know how to run the command that serves web traffic. It should look like this: \n web: datasette . -h 0.0.0.0 -p $PORT --cors \n The $PORT environment variable is provided by the hosting platform. --cors enables CORS requests from JavaScript running on other websites to your domain - omit this if you don't want to allow CORS. You can add additional Datasette Settings options here too. \n These two files should be enough to deploy Datasette on any host that supports buildpacks. Datasette will serve any SQLite files that are included in the root directory of the application. \n If you want to build SQLite files or download them as part of the deployment process you can do so using a bin/post_compile file. 
For example, the following bin/post_compile will download an example database that will then be served by Datasette: \n wget https://fivethirtyeight.datasettes.com/fivethirtyeight.db \n simonw/buildpack-datasette-demo is an example GitHub repository showing a Datasette configuration that can be deployed to a buildpack-supporting host.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[{\"href\": \"https://www.heroku.com/\", \"label\": \"Heroku\"}, {\"href\": \"https://www.digitalocean.com/docs/app-platform/\", \"label\": \"DigitalOcean App Platform\"}, {\"href\": \"https://scalingo.com/\", \"label\": \"Scalingo\"}, {\"href\": \"https://buildpacks.io/\", \"label\": \"Buildpacks standard\"}, {\"href\": \"https://github.com/simonw/buildpack-datasette-demo\", \"label\": \"simonw/buildpack-datasette-demo\"}]"} {"id": "deploying:deploying-fundamentals", "page": "deploying", "ref": "deploying-fundamentals", "title": "Deployment fundamentals", "content": "Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000. \n If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:deploying-openrc", "page": "deploying", "ref": "deploying-openrc", "title": "Running Datasette using OpenRC", "content": "OpenRC is the service manager on non-systemd Linux distributions like Alpine Linux and Gentoo . \n Create an init script at /etc/init.d/datasette with the following contents: \n #!/sbin/openrc-run\n\nname=\"datasette\"\ncommand=\"datasette\"\ncommand_args=\"serve -h 0.0.0.0 /path/to/db.db\"\ncommand_background=true\npidfile=\"/run/${RC_SVCNAME}.pid\" \n You then need to configure the service to run at boot and start it: \n rc-update add datasette\nrc-service datasette start", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[{\"href\": \"https://www.alpinelinux.org/\", \"label\": \"Alpine Linux\"}, {\"href\": \"https://www.gentoo.org/\", \"label\": \"Gentoo\"}]"} {"id": "plugins:deploying-plugins-using-datasette-publish", "page": "plugins", "ref": "deploying-plugins-using-datasette-publish", "title": "Deploying plugins using datasette publish", "content": "The datasette publish and datasette package commands both take an optional --install argument. You can use this one or more times to tell Datasette to pip install specific plugins as part of the process: \n datasette publish cloudrun mydb.db --install=datasette-vega \n You can use the name of a package on PyPI or any of the other valid arguments to pip install such as a URL to a .zip file: \n datasette publish cloudrun mydb.db \\\n --install=https://url-to-my-package.zip", "breadcrumbs": "[\"Plugins\", \"Installing plugins\"]", "references": "[]"} {"id": "deploying:deploying-proxy", "page": "deploying", "ref": "deploying-proxy", "title": "Running Datasette behind a proxy", "content": "You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. \n You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. 
For example, you could run Datasette like this: \n datasette my-database.db --setting base_url /my-datasette/ -p 8009 \n This will run Datasette with the following URLs: \n \n \n http://127.0.0.1:8009/my-datasette/ - the Datasette homepage \n \n \n http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database \n \n \n http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table \n \n \n You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:deploying-systemd", "page": "deploying", "ref": "deploying-systemd", "title": "Running Datasette using systemd", "content": "You can run Datasette on Ubuntu or Debian systems using systemd . \n First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip . \n You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . \n Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . \n You can copy a test database into that folder like so: \n cd /home/ubuntu/datasette-root\ncurl -O https://latest.datasette.io/fixtures.db \n Create a file at /etc/systemd/system/datasette.service with the following contents: \n [Unit]\nDescription=Datasette\nAfter=network.target\n\n[Service]\nType=simple\nUser=ubuntu\nEnvironment=DATASETTE_SECRET=\nWorkingDirectory=/home/ubuntu/datasette-root\nExecStart=datasette serve . -h 127.0.0.1 -p 8000\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target \n Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: \n python3 -c 'import secrets; print(secrets.token_hex(32))' \n This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. \n You can start the Datasette process running using the following: \n sudo systemctl daemon-reload\nsudo systemctl start datasette.service \n You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using: \n sudo systemctl restart datasette.service \n Once the service has started you can confirm that Datasette is running on port 8000 like so: \n curl 127.0.0.1:8000/-/versions.json\n# Should output JSON showing the installed version \n Datasette will not be accessible from outside the server because it is listening on 127.0.0.1 . You can expose it by instead listening on 0.0.0.0 , but a better way is to set up a proxy such as nginx - see Running Datasette behind a proxy .", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "contributing:devenvironment", "page": "contributing", "ref": "devenvironment", "title": "Setting up a development environment", "content": "If you have Python 3.8 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps. 
\n If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account. \n Now clone that repository somewhere on your computer: \n git clone git@github.com:YOURNAME/datasette \n If you want to get started without creating your own fork, you can do this instead: \n git clone git@github.com:simonw/datasette \n The next step is to create a virtual environment for your project and use it to install Datasette's dependencies: \n cd datasette\n# Create a virtual environment in ./venv\npython3 -m venv ./venv\n# Now activate the virtual environment, so pip can install into it\nsource venv/bin/activate\n# Install Datasette and its testing dependencies\npython3 -m pip install -e '.[test]' \n That last line does most of the work: pip install -e means \"install this package in a way that allows me to edit the source code in place\". The .[test] option means \"use the setup.py in this directory and install the optional testing dependencies as well\".", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://docs.python-guide.org/starting/install3/osx/\", \"label\": \"is using homebrew\"}, {\"href\": \"https://github.com/simonw/datasette/fork\", \"label\": \"create a fork of datasette\"}]"} {"id": "changelog:documentation", "page": "changelog", "ref": "documentation", "title": "Documentation", "content": "Documentation describing how to write tests that use signed actor cookies using datasette.client.actor_cookie() . ( #1830 ) \n \n \n Documentation on how to register a plugin for the duration of a test . ( #2234 ) \n \n \n The configuration documentation now shows examples of both YAML and JSON for each setting.", "breadcrumbs": "[\"Changelog\", \"1.0a8 (2024-02-07)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1830\", \"label\": \"#1830\"}, {\"href\": \"https://github.com/simonw/datasette/issues/2234\", \"label\": \"#2234\"}]"} {"id": "ecosystem:dogsheep", "page": "ecosystem", "ref": "dogsheep", "title": "Dogsheep", "content": "Dogsheep is a collection of tools for personal analytics using SQLite and Datasette. The project provides tools like github-to-sqlite and twitter-to-sqlite that can import data from different sources in order to create a personal data warehouse. Personal Data Warehouses: Reclaiming Your Data is a talk that explains Dogsheep and demonstrates it in action.", "breadcrumbs": "[\"The Datasette Ecosystem\"]", "references": "[{\"href\": \"https://dogsheep.github.io/\", \"label\": \"Dogsheep\"}, {\"href\": \"https://datasette.io/tools/github-to-sqlite\", \"label\": \"github-to-sqlite\"}, {\"href\": \"https://datasette.io/tools/twitter-to-sqlite\", \"label\": \"twitter-to-sqlite\"}, {\"href\": \"https://simonwillison.net/2020/Nov/14/personal-data-warehouses/\", \"label\": \"Personal Data Warehouses: Reclaiming Your Data\"}]"} {"id": "ecosystem:ecosystem", "page": "ecosystem", "ref": "ecosystem", "title": "The Datasette Ecosystem", "content": "Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data. \n These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality. 
\n The Datasette project website includes a directory of plugins and a directory of tools: \n \n \n Plugins directory on datasette.io \n \n \n Tools directory on datasette.io", "breadcrumbs": "[]", "references": "[{\"href\": \"https://datasette.io/\", \"label\": \"Datasette project website\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Plugins directory on datasette.io\"}, {\"href\": \"https://datasette.io/tools\", \"label\": \"Tools directory on datasette.io\"}]"} {"id": "json_api:expand-foreign-keys", "page": "json_api", "ref": "expand-foreign-keys", "title": "Expanding foreign key references", "content": "Datasette can detect foreign key relationships and resolve those references into\n labels. The HTML interface does this by default for every detected foreign key\n column - you can turn that off using ?_labels=off . \n You can request foreign keys be expanded in JSON using the _labels=on or\n _label=COLUMN special query string parameters. Here's what an expanded row\n looks like: \n [\n {\n \"rowid\": 1,\n \"TreeID\": 141565,\n \"qLegalStatus\": {\n \"value\": 1,\n \"label\": \"Permitted Site\"\n },\n \"qSpecies\": {\n \"value\": 1,\n \"label\": \"Myoporum laetum :: Myoporum\"\n },\n \"qAddress\": \"501X Baker St\",\n \"SiteOrder\": 1\n }\n] \n The column in the foreign key table that is used for the label can be specified\n in metadata.json - see Specifying the label column for a table .", "breadcrumbs": "[\"JSON API\"]", "references": "[]"} {"id": "changelog:facet-by-date", "page": "changelog", "ref": "facet-by-date", "title": "Facet by date", "content": "If a column contains datetime values, Datasette can now facet that column by date. ( #481 )", "breadcrumbs": "[\"Changelog\", \"0.29 (2019-07-07)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/481\", \"label\": \"#481\"}]"} {"id": "changelog:faceting", "page": "changelog", "ref": "faceting", "title": "Faceting", "content": "The number of unique values in a facet is now always displayed. Previously it was only displayed if the user specified ?_facet_size=max . ( #1556 ) \n \n \n Facets of type date or array can now be configured in metadata.json , see Facets in metadata . Thanks, David Larlet. ( #1552 ) \n \n \n New ?_nosuggest=1 parameter for table views, which disables facet suggestion. ( #1557 ) \n \n \n Fixed bug where ?_facet_array=tags&_facet=tags would only display one of the two selected facets. 
( #625 )", "breadcrumbs": "[\"Changelog\", \"0.60 (2022-01-13)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1556\", \"label\": \"#1556\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1552\", \"label\": \"#1552\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1557\", \"label\": \"#1557\"}, {\"href\": \"https://github.com/simonw/datasette/issues/625\", \"label\": \"#625\"}]"} {"id": "facets:facets-in-query-strings", "page": "facets", "ref": "facets-in-query-strings", "title": "Facets in query strings", "content": "To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL.\n For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this: \n /dbname/tablename?_facet=state&_facet=city_id \n This works for both the HTML interface and the .json view.\n When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this: \n {\n \"state\": {\n \"name\": \"state\",\n \"results\": [\n {\n \"value\": \"CA\",\n \"label\": \"CA\",\n \"count\": 10,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=CA\",\n \"selected\": false\n },\n {\n \"value\": \"MI\",\n \"label\": \"MI\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=MI\",\n \"selected\": false\n },\n {\n \"value\": \"MC\",\n \"label\": \"MC\",\n \"count\": 1,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=MC\",\n \"selected\": false\n }\n ],\n \"truncated\": false\n },\n \"city_id\": {\n \"name\": \"city_id\",\n \"results\": [\n {\n \"value\": 1,\n \"label\": \"San Francisco\",\n \"count\": 6,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=1\",\n \"selected\": false\n },\n {\n \"value\": 2,\n \"label\": \"Los Angeles\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=2\",\n \"selected\": false\n },\n {\n \"value\": 3,\n \"label\": \"Detroit\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=3\",\n \"selected\": false\n },\n {\n \"value\": 4,\n \"label\": \"Memnonia\",\n \"count\": 1,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=4\",\n \"selected\": false\n }\n ],\n \"truncated\": false\n }\n} \n If Datasette detects that a column is a foreign key, the \"label\" property will be automatically derived from the detected label column on the referenced table. \n The default number of facet results returned is 30, controlled by the default_facet_size setting.\n You can increase this on an individual page by adding ?_facet_size=100 to the query string, up to a maximum of max_returned_rows (which defaults to 1000).", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "facets:facets-metadata", "page": "facets", "ref": "facets-metadata", "title": "Facets in metadata", "content": "You can turn facets on by default for specific tables by adding them to a \"facets\" key in a Datasette Metadata file. 
\n Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"]\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view. \n You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this: \n [[[cog\nmetadata_example(cog, {\n \"facets\": [\n {\"array\": \"tags\"},\n {\"date\": \"created\"}\n ]\n}) \n ]]] \n [[[end]]] \n You can change the default facet size (the number of results shown for each facet) for a table using facet_size : \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"],\n \"facet_size\": 10\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "changelog:features", "page": "changelog", "ref": "features", "title": "Features", "content": "Now tested against Python 3.11. Docker containers used by datasette publish and datasette package both now use that version of Python. ( #1853 ) \n \n \n --load-extension option now supports entrypoints. Thanks, Alex Garcia. ( #1789 ) \n \n \n Facet size can now be set per-table with the new facet_size table metadata option. ( #1804 ) \n \n \n The truncate_cells_html setting now also affects long URLs in columns. ( #1805 ) \n \n \n The non-JavaScript SQL editor textarea now increases height to fit the SQL query. ( #1786 ) \n \n \n Facets are now displayed with better line-breaks in long values. Thanks, Daniel Rech. ( #1794 ) \n \n \n The settings.json file used in Configuration directory mode is now validated on startup. ( #1816 ) \n \n \n SQL queries can now include leading SQL comments, using /* ... */ or -- ... syntax. Thanks, Charles Nepote. ( #1860 ) \n \n \n SQL query is now re-displayed when terminated with a time limit error. ( #1819 ) \n \n \n The inspect data mechanism is now used to speed up server startup - thanks, Forest Gregg. ( #1834 ) \n \n \n In Configuration directory mode databases with filenames ending in .sqlite or .sqlite3 are now automatically added to the Datasette instance. ( #1646 ) \n \n \n Breadcrumb navigation display now respects the current user's permissions. 
( #1831 )", "breadcrumbs": "[\"Changelog\", \"0.63 (2022-10-27)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/1853\", \"label\": \"#1853\"}, {\"href\": \"https://github.com/simonw/datasette/pull/1789\", \"label\": \"#1789\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1804\", \"label\": \"#1804\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1805\", \"label\": \"#1805\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1786\", \"label\": \"#1786\"}, {\"href\": \"https://github.com/simonw/datasette/pull/1794\", \"label\": \"#1794\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1816\", \"label\": \"#1816\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1860\", \"label\": \"#1860\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1819\", \"label\": \"#1819\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1834\", \"label\": \"#1834\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1646\", \"label\": \"#1646\"}, {\"href\": \"https://github.com/simonw/datasette/issues/1831\", \"label\": \"#1831\"}]"} {"id": "changelog:flash-messages", "page": "changelog", "ref": "flash-messages", "title": "Flash messages", "content": "Writable canned queries needed a mechanism to let the user know that the query has been successfully executed. The new flash messaging system ( #790 ) allows messages to persist in signed cookies which are then displayed to the user on the next page that they visit. Plugins can use this mechanism to display their own messages, see .add_message(request, message, type=datasette.INFO) for details. \n You can try out the new messages using the /-/messages debug tool, for example at https://latest.datasette.io/-/messages", "breadcrumbs": "[\"Changelog\", \"0.44 (2020-06-11)\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/790\", \"label\": \"#790\"}, {\"href\": \"https://latest.datasette.io/-/messages\", \"label\": \"https://latest.datasette.io/-/messages\"}]"} {"id": "changelog:foreign-key-expansions", "page": "changelog", "ref": "foreign-key-expansions", "title": "Foreign key expansions", "content": "When Datasette detects a foreign key reference it attempts to resolve a label\n for that reference (automatically or using the Specifying the label column for a table metadata\n option) so it can display a link to the associated row. \n This expansion is now also available for JSON and CSV representations of the\n table, using the new _labels=on query string option. See\n Expanding foreign key references for more details.", "breadcrumbs": "[\"Changelog\", \"0.23 (2018-06-18)\"]", "references": "[]"} {"id": "sql_queries:fragment", "page": "sql_queries", "ref": "fragment", "title": "fragment", "content": "Some plugins, such as datasette-vega , can be configured by including additional data in the fragment hash of the URL - the bit that comes after a # symbol. \n You can set a default fragment hash that will be included in the link to the canned query from the database index page using the \"fragment\" key. 
\n This example demonstrates both fragment and hide_sql : \n [[[cog\nconfig_example(cog, \"\"\"\ndatabases:\n fixtures:\n queries:\n neighborhood_search:\n fragment: fragment-goes-here\n hide_sql: true\n sql: |-\n select neighborhood, facet_cities.name, state\n from facetable join facet_cities on facetable.city_id = facet_cities.id\n where neighborhood like '%' || :text || '%' order by neighborhood;\n\"\"\") \n ]]] \n [[[end]]] \n See here for a demo of this in action.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\", \"Additional canned query options\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-vega\", \"label\": \"datasette-vega\"}, {\"href\": \"https://latest.datasette.io/fixtures#queries\", \"label\": \"See here\"}]"} {"id": "full_text_search:full-text-search-advanced-queries", "page": "full_text_search", "ref": "full-text-search-advanced-queries", "title": "Advanced SQLite search queries", "content": "SQLite full-text search includes support for a variety of advanced queries , including AND , OR , NOT and NEAR . \n By default Datasette disables these features to ensure they do not cause errors or confusion for users who are not aware of them. You can disable this escaping and use the advanced queries by adding &_searchmode=raw to the table page query string. \n If you want to enable these operators by default for a specific table, you can do so by adding \"searchmode\": \"raw\" to the metadata configuration for that table, see Configuring full-text search for a table or view . \n If that option has been specified in the table metadata but you want to over-ride it and return to the default behavior you can append &_searchmode=escaped to the query string.", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://www.sqlite.org/fts5.html#full_text_query_syntax\", \"label\": \"a variety of advanced queries\"}]"} {"id": "full_text_search:full-text-search-custom-sql", "page": "full_text_search", "ref": "full-text-search-custom-sql", "title": "Searches using custom SQL", "content": "You can include full-text search results in custom SQL queries. The general pattern with SQLite search is to run the search as a sub-select that returns rowid values, then include those rowids in another part of the query. \n You can see the syntax for a basic search by running that search on a table page and then clicking \"View and edit SQL\" to see the underlying SQL. 
For example, consider this search for manafort in the US FARA database : \n /fara/FARA_All_ShortForms?_search=manafort \n If you click View and edit SQL you'll see that the underlying SQL looks like this: \n select\n rowid,\n Short_Form_Termination_Date,\n Short_Form_Date,\n Short_Form_Last_Name,\n Short_Form_First_Name,\n Registration_Number,\n Registration_Date,\n Registrant_Name,\n Address_1,\n Address_2,\n City,\n State,\n Zip\nfrom\n FARA_All_ShortForms\nwhere\n rowid in (\n select\n rowid\n from\n FARA_All_ShortForms_fts\n where\n FARA_All_ShortForms_fts match escape_fts(:search)\n )\norder by\n rowid\nlimit\n 101", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"manafort in the US FARA database\"}, {\"href\": \"https://fara.datasettes.com/fara?sql=select%0D%0A++rowid%2C%0D%0A++Short_Form_Termination_Date%2C%0D%0A++Short_Form_Date%2C%0D%0A++Short_Form_Last_Name%2C%0D%0A++Short_Form_First_Name%2C%0D%0A++Registration_Number%2C%0D%0A++Registration_Date%2C%0D%0A++Registrant_Name%2C%0D%0A++Address_1%2C%0D%0A++Address_2%2C%0D%0A++City%2C%0D%0A++State%2C%0D%0A++Zip%0D%0Afrom%0D%0A++FARA_All_ShortForms%0D%0Awhere%0D%0A++rowid+in+%28%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++FARA_All_ShortForms_fts%0D%0A++++where%0D%0A++++++FARA_All_ShortForms_fts+match+escape_fts%28%3Asearch%29%0D%0A++%29%0D%0Aorder+by%0D%0A++rowid%0D%0Alimit%0D%0A++101&search=manafort\", \"label\": \"View and edit SQL\"}]"} {"id": "full_text_search:full-text-search-enabling", "page": "full_text_search", "ref": "full-text-search-enabling", "title": "Enabling full-text search for a SQLite table", "content": "Datasette takes advantage of the external content mechanism in SQLite, which allows a full-text search virtual table to be associated with the contents of another SQLite table. \n To set up full-text search for a table, you need to do two things: \n \n \n Create a new FTS virtual table associated with your table \n \n \n Populate that FTS table with the data that you would like to be able to run searches against", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://www.sqlite.org/fts3.html#_external_content_fts4_tables_\", \"label\": \"external content\"}]"} {"id": "full_text_search:full-text-search-fts-versions", "page": "full_text_search", "ref": "full-text-search-fts-versions", "title": "FTS versions", "content": "There are three different versions of the SQLite FTS module: FTS3, FTS4 and FTS5. You can tell which versions are supported by your instance of Datasette by checking the /-/versions page. \n FTS5 is the most advanced module but may not be available in the SQLite version that is bundled with your Python installation. Most importantly, FTS5 is the only version that has the ability to order by search relevance without needing extra code. \n If you can't be sure that FTS5 will be available, you should use FTS4.", "breadcrumbs": "[\"Full-text search\"]", "references": "[]"} {"id": "full_text_search:full-text-search-table-or-view", "page": "full_text_search", "ref": "full-text-search-table-or-view", "title": "Configuring full-text search for a table or view", "content": "If a table has a corresponding FTS table set up using the content= argument to CREATE VIRTUAL TABLE shown below, Datasette will detect it automatically and add a search interface to the table page for that table. 
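\n A minimal sketch of that kind of setup, wrapped in Python's sqlite3 module and using hypothetical documents / documents_fts table names (FTS5 syntax): \n import sqlite3\n\nconn = sqlite3.connect(\"mydata.db\")\nconn.executescript(\n    \"\"\"\n    -- An FTS5 index whose content is backed by the existing documents table\n    CREATE VIRTUAL TABLE documents_fts USING fts5(\n        title, body,\n        content='documents',\n        content_rowid='rowid'\n    );\n    -- Populate the index from the existing rows\n    INSERT INTO documents_fts (rowid, title, body)\n        SELECT rowid, title, body FROM documents;\n    \"\"\"\n)\nconn.commit() 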
\n You can also manually configure which table should be used for full-text search using query string parameters or Metadata . You can set the associated FTS table for a specific table and you can also set one for a view - if you do that, the page for that SQL view will offer a search option. \n Use ?_fts_table=x to over-ride the FTS table for a specific page. If the primary key was something other than rowid you can use ?_fts_pk=col to set that as well. This is particularly useful for views, for example: \n https://latest.datasette.io/fixtures/searchable_view?_fts_table=searchable_fts&_fts_pk=pk \n The fts_table metadata property can be used to specify an associated FTS table. If the primary key column in your table which was used to populate the FTS table is something other than rowid , you can specify the column to use with the fts_pk property. \n The \"searchmode\": \"raw\" property can be used to default the table to accepting SQLite advanced search operators, as described in Advanced SQLite search queries . \n Here is an example which enables full-text search (with SQLite advanced search operators) for a display_ads view which is defined against the ads table and hence needs to run FTS against the ads_fts table, using the id as the primary key: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"russian-ads\": {\n \"tables\": {\n \"display_ads\": {\n \"fts_table\": \"ads_fts\",\n \"fts_pk\": \"id\",\n \"searchmode\": \"raw\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/searchable_view?_fts_table=searchable_fts&_fts_pk=pk\", \"label\": \"https://latest.datasette.io/fixtures/searchable_view?_fts_table=searchable_fts&_fts_pk=pk\"}]"} {"id": "full_text_search:full-text-search-table-view-api", "page": "full_text_search", "ref": "full-text-search-table-view-api", "title": "The table page and table view API", "content": "Table views that support full-text search can be queried using the ?_search=TERMS query string parameter. This will run the search against content from all of the columns that have been included in the index. \n Try this example: fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort \n SQLite full-text search supports wildcards. This means you can easily implement prefix auto-complete by including an asterisk at the end of the search term - for example: \n /dbname/tablename/?_search=rob* \n This will return all records containing at least one word that starts with the letters rob . \n You can also run searches against just the content of a specific named column by using _search_COLNAME=TERMS - for example, this would search for just rows where the name column in the FTS index mentions Sarah : \n /dbname/tablename/?_search_name=Sarah", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\"}]"} {"id": "contributing:general-guidelines", "page": "contributing", "ref": "general-guidelines", "title": "General guidelines", "content": "main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. \n \n \n The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. 
\n \n \n New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them.", "breadcrumbs": "[\"Contributing\"]", "references": "[]"} {"id": "getting_started:getting-started", "page": "getting_started", "ref": "getting-started", "title": "Getting started", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "getting_started:getting-started-datasette-lite", "page": "getting_started", "ref": "getting-started-datasette-lite", "title": "Datasette in your browser with Datasette Lite", "content": "Datasette Lite is Datasette packaged using WebAssembly so that it runs entirely in your browser, no Python web application server required. \n You can pass a URL to a CSV, SQLite or raw SQL file directly to Datasette Lite to explore that data in your browser. \n This example link opens Datasette Lite and loads the SQL Murder Mystery example database from Northwestern University Knight Lab .", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://lite.datasette.io/\", \"label\": \"Datasette Lite\"}, {\"href\": \"https://lite.datasette.io/?url=https%3A%2F%2Fraw.githubusercontent.com%2FNUKnightLab%2Fsql-mysteries%2Fmaster%2Fsql-murder-mystery.db#/sql-murder-mystery\", \"label\": \"example link\"}, {\"href\": \"https://github.com/NUKnightLab/sql-mysteries\", \"label\": \"Northwestern University Knight Lab\"}]"} {"id": "getting_started:getting-started-demo", "page": "getting_started", "ref": "getting-started-demo", "title": "Play with a live demo", "content": "The best way to experience Datasette for the first time is with a demo: \n \n \n global-power-plants.datasettes.com provides a searchable database of power plants around the world, using data from the World Resources Institute, rendered using the datasette-cluster-map plugin. \n \n \n fivethirtyeight.datasettes.com shows Datasette running against over 400 datasets imported from the FiveThirtyEight GitHub repository .", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://global-power-plants.datasettes.com/global-power-plants/global-power-plants\", \"label\": \"global-power-plants.datasettes.com\"}, {\"href\": \"https://www.wri.org/publication/global-power-plant-database\", \"label\": \"World Resources Institute\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight\", \"label\": \"fivethirtyeight.datasettes.com\"}, {\"href\": \"https://github.com/fivethirtyeight/data\", \"label\": \"FiveThirtyEight GitHub repository\"}]"} {"id": "getting_started:getting-started-glitch", "page": "getting_started", "ref": "getting-started-glitch", "title": "Try Datasette without installing anything using Glitch", "content": "Glitch is a free online tool for building web apps directly from your web browser. You can use Glitch to try out Datasette without needing to install any software on your own computer. \n Here's a demo project on Glitch which you can use as the basis for your own experiments: \n glitch.com/~datasette-csvs \n Glitch allows you to \"remix\" any project to create your own copy and start editing it in your browser. You can remix the datasette-csvs project by clicking this button: \n \n Find a CSV file and drag it onto the Glitch file explorer panel - datasette-csvs will automatically convert it to a SQLite database (using sqlite-utils ) and allow you to start exploring it using Datasette. 
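\n For a rough idea of what that conversion involves, here is a minimal sketch using the sqlite-utils Python library directly, with hypothetical file and table names: \n import csv\nimport sqlite_utils\n\ndb = sqlite_utils.Database(\"data.db\")\nwith open(\"mydata.csv\", newline=\"\") as f:\n    # Each CSV row becomes a record in a new \"mydata\" table\n    db[\"mydata\"].insert_all(csv.DictReader(f)) 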
\n If your CSV file has a latitude and longitude column you can visualize it on a map by uncommenting the datasette-cluster-map line in the requirements.txt file using the Glitch file editor. \n Need some data? Try this Public Art Data for the city of Seattle - hit \"Export\" and select \"CSV\" to download it as a CSV file. \n For more on how this works, see Running Datasette on Glitch .", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://glitch.com/\", \"label\": \"Glitch\"}, {\"href\": \"https://glitch.com/~datasette-csvs\", \"label\": \"glitch.com/~datasette-csvs\"}, {\"href\": \"https://glitch.com/edit/#!/remix/datasette-csvs\", \"label\": null}, {\"href\": \"https://github.com/simonw/sqlite-utils\", \"label\": \"sqlite-utils\"}, {\"href\": \"https://data.seattle.gov/Community/Public-Art-Data/j7sn-tdzk\", \"label\": \"Public Art Data\"}, {\"href\": \"https://simonwillison.net/2019/Apr/23/datasette-glitch/\", \"label\": \"Running Datasette on Glitch\"}]"} {"id": "getting_started:getting-started-tutorial", "page": "getting_started", "ref": "getting-started-tutorial", "title": "Follow a tutorial", "content": "Datasette has several tutorials to help you get started with the tool. Try one of the following: \n \n \n Exploring a database with Datasette shows how to use the Datasette web interface to explore a new database. \n \n \n Learn SQL with Datasette introduces SQL, and shows how to use that query language to ask questions of your data. \n \n \n Cleaning data with sqlite-utils and Datasette guides you through using sqlite-utils to turn a CSV file into a database that you can explore using Datasette.", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://datasette.io/tutorials\", \"label\": \"tutorials\"}, {\"href\": \"https://datasette.io/tutorials/explore\", \"label\": \"Exploring a database with Datasette\"}, {\"href\": \"https://datasette.io/tutorials/learn-sql\", \"label\": \"Learn SQL with Datasette\"}, {\"href\": \"https://datasette.io/tutorials/clean-data\", \"label\": \"Cleaning data with sqlite-utils and Datasette\"}, {\"href\": \"https://sqlite-utils.datasette.io/\", \"label\": \"sqlite-utils\"}]"} {"id": "getting_started:getting-started-your-computer", "page": "getting_started", "ref": "getting-started-your-computer", "title": "Using Datasette on your own computer", "content": "First, follow the Installation instructions. Now you can run Datasette against a SQLite file on your computer using the following command: \n datasette path/to/database.db \n This will start a web server on port 8001 - visit http://localhost:8001/ \n to access the web interface. \n Add -o to open your browser automatically once Datasette has started: \n datasette path/to/database.db -o \n Use Chrome on OS X? You can run datasette against your browser history\n like so: \n datasette ~/Library/Application\\ Support/Google/Chrome/Default/History --nolock \n The --nolock option ignores any file locks. This is safe as Datasette will open the file in read-only mode. 
\n Now visiting http://localhost:8001/History/downloads will show you a web\n interface to browse your downloads data: \n \n \n \n http://localhost:8001/History/downloads.json will return that data as\n JSON: \n {\n \"database\": \"History\",\n \"columns\": [\n \"id\",\n \"current_path\",\n \"target_path\",\n \"start_time\",\n \"received_bytes\",\n \"total_bytes\",\n ...\n ],\n \"rows\": [\n [\n 1,\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n 13097290269022132,\n 626688,\n 0,\n ...\n ]\n ]\n} \n http://localhost:8001/History/downloads.json?_shape=objects will return that data as\n JSON in a more convenient format: \n {\n ...\n \"rows\": [\n {\n \"start_time\": 13097290269022132,\n \"interrupt_reason\": 0,\n \"hash\": \"\",\n \"id\": 1,\n \"site_url\": \"\",\n \"referrer\": \"https://www.dropbox.com/downloading?src=index\",\n ...\n }\n ]\n}", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"http://localhost:8001/\", \"label\": \"http://localhost:8001/\"}, {\"href\": \"http://localhost:8001/History/downloads\", \"label\": \"http://localhost:8001/History/downloads\"}, {\"href\": \"http://localhost:8001/History/downloads.json\", \"label\": \"http://localhost:8001/History/downloads.json\"}, {\"href\": \"http://localhost:8001/History/downloads.json?_shape=objects\", \"label\": \"http://localhost:8001/History/downloads.json?_shape=objects\"}]"} {"id": "sql_queries:hide-sql", "page": "sql_queries", "ref": "hide-sql", "title": "hide_sql", "content": "Canned queries default to displaying their SQL query at the top of the page. If the query is extremely long you may want to hide it by default, with a \"show\" link that can be used to make it visible. \n Add the \"hide_sql\": true option to hide the SQL query by default.", "breadcrumbs": "[\"Running SQL queries\", \"Canned queries\", \"Additional canned query options\"]", "references": "[]"} {"id": "performance:http-caching", "page": "performance", "ref": "http-caching", "title": "HTTP caching", "content": "If your database is immutable and guaranteed not to change, you can gain major performance improvements from Datasette by enabling HTTP caching. \n This can work at two different levels. First, it can tell browsers to cache the results of queries and serve future requests from the browser cache. \n More significantly, it allows you to run Datasette behind a caching proxy such as Varnish or use a cache provided by a hosted service such as Fastly or Cloudflare . This can provide incredible speed-ups since a query only needs to be executed by Datasette the first time it is accessed - all subsequent hits can then be served by the cache. \n Using a caching proxy in this way could enable a Datasette-backed visualization to serve thousands of hits a second while running Datasette itself on extremely inexpensive hosting. \n Datasette's integration with HTTP caches can be enabled using a combination of configuration options and query string arguments. \n The default_cache_ttl setting sets the default HTTP cache TTL for all Datasette pages. This is 5 seconds unless you change it - you can set it to 0 if you wish to disable HTTP caching entirely. \n You can also change the cache timeout on a per-request basis using the ?_ttl=10 query string parameter. 
This can be useful when you are working with the Datasette JSON API - you may decide that a specific query can be cached for a longer time, or maybe you need to set ?_ttl=0 for some requests, for example if you are running a SQL order by random() query.", "breadcrumbs": "[\"Performance and caching\"]", "references": "[{\"href\": \"https://varnish-cache.org/\", \"label\": \"Varnish\"}, {\"href\": \"https://www.fastly.com/\", \"label\": \"Fastly\"}, {\"href\": \"https://www.cloudflare.com/\", \"label\": \"Cloudflare\"}]"} {"id": "authentication:id1", "page": "authentication", "ref": "id1", "title": "Built-in permissions", "content": "This section lists all of the permission checks that are carried out by Datasette core, along with the resource if it was passed.", "breadcrumbs": "[\"Authentication and permissions\"]", "references": "[]"} {"id": "changelog:id1", "page": "changelog", "ref": "id1", "title": "Changelog", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "cli-reference:id1", "page": "cli-reference", "ref": "id1", "title": "CLI reference", "content": "The datasette CLI tool provides a number of commands. \n Running datasette without specifying a command runs the default command, datasette serve . See datasette serve for the full list of options for that command. \n [[[cog\nfrom datasette import cli\nfrom click.testing import CliRunner\nimport textwrap\ndef help(args):\n title = \"datasette \" + \" \".join(args)\n cog.out(\"\\n::\\n\\n\")\n result = CliRunner().invoke(cli.cli, args)\n output = result.output.replace(\"Usage: cli \", \"Usage: datasette \")\n cog.out(textwrap.indent(output, ' '))\n cog.out(\"\\n\\n\") \n ]]] \n [[[end]]]", "breadcrumbs": "[]", "references": "[]"} {"id": "configuration:id1", "page": "configuration", "ref": "id1", "title": "Configuration", "content": "Datasette offers several ways to configure your Datasette instances: server settings, plugin configuration, authentication, and more. \n Most configuration can be handled using a datasette.yaml configuration file, passed to datasette using the -c/--config flag: \n datasette mydatabase.db --config datasette.yaml \n This file can also use JSON, as datasette.json . YAML is recommended over JSON due to its support for comments and multi-line strings.", "breadcrumbs": "[]", "references": "[]"} {"id": "contributing:id1", "page": "contributing", "ref": "id1", "title": "Contributing", "content": "Datasette is an open source project. We welcome contributions! \n This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins .", "breadcrumbs": "[]", "references": "[]"} {"id": "csv_export:id1", "page": "csv_export", "ref": "id1", "title": "CSV export", "content": "Any Datasette table, view or custom SQL query can be exported as CSV. \n To obtain the CSV representation of the table you are looking at, click the \"this\n data as CSV\" link. \n You can also use the advanced export form for more control over the resulting\n file, which looks like this and has the following options: \n \n \n \n download file - instead of displaying CSV in your browser, this forces\n your browser to download the CSV to your downloads directory. \n \n \n expand labels - if your table has any foreign key references this option\n will cause the CSV to gain additional COLUMN_NAME_label columns with a\n label for each foreign key derived from the linked table. In this example \n the city_id column is accompanied by a city_id_label column. 
\n \n \n stream all rows - by default CSV files only contain the first\n max_returned_rows records. This option will cause Datasette to\n loop through every matching record and return them as a single CSV file. \n \n \n You can try that out on https://latest.datasette.io/fixtures/facetable?_size=4", "breadcrumbs": "[]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/facetable.csv?_labels=on&_size=max\", \"label\": \"In this example\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable?_size=4\", \"label\": \"https://latest.datasette.io/fixtures/facetable?_size=4\"}]"} {"id": "custom_templates:id1", "page": "custom_templates", "ref": "id1", "title": "Custom pages", "content": "You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory. \n For example, to add a custom page that is served at http://localhost/about you would create a file in templates/pages/about.html , then start Datasette like this: \n datasette mydb.db --template-dir=templates/ \n You can nest directories within pages to create a nested structure. To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html .", "breadcrumbs": "[\"Custom pages and templates\", \"Publishing static assets\"]", "references": "[]"} {"id": "events:id1", "page": "events", "ref": "id1", "title": "Events", "content": "Datasette includes a mechanism for tracking events that occur while the software is running. This is primarily intended to be used by plugins, which can both trigger events and listen for events. \n The core Datasette application triggers events when certain things happen. This page describes those events. \n Plugins can listen for events using the track_event(datasette, event) plugin hook, which will be called with instances of the following classes - or additional classes registered by other plugins . \n \n \n \n \n class datasette.events. LoginEvent actor : dict | None \n \n Event name: login \n A user (represented by event.actor ) has logged in. \n \n \n \n \n class datasette.events. LogoutEvent actor : dict | None \n \n Event name: logout \n A user (represented by event.actor ) has logged out. \n \n \n \n \n class datasette.events. CreateTokenEvent actor : dict | None expires_after : int | None restrict_all : list restrict_database : dict restrict_resource : dict \n \n Event name: create-token \n A user created an API token. \n \n \n Variables \n \n \n \n expires_after -- Number of seconds after which this token will expire. \n \n \n restrict_all -- Restricted permissions for this token. \n \n \n restrict_database -- Restricted database permissions for this token. \n \n \n restrict_resource -- Restricted resource permissions for this token. \n \n \n \n \n \n \n \n \n \n class datasette.events. CreateTableEvent actor : dict | None database : str table : str schema : str \n \n Event name: create-table \n A new table has been created in the database. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was created. \n \n \n table -- The name of the table that was created \n \n \n schema -- The SQL schema definition for the new table. \n \n \n \n \n \n \n \n \n \n class datasette.events. DropTableEvent actor : dict | None database : str table : str \n \n Event name: drop-table \n A table has been dropped from the database. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was dropped. 
\n \n \n table -- The name of the table that was dropped \n \n \n \n \n \n \n \n \n \n class datasette.events. AlterTableEvent actor : dict | None database : str table : str before_schema : str after_schema : str \n \n Event name: alter-table \n A table has been altered. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was altered \n \n \n table -- The name of the table that was altered \n \n \n before_schema -- The table's SQL schema before the alteration \n \n \n after_schema -- The table's SQL schema after the alteration \n \n \n \n \n \n \n \n \n \n class datasette.events. InsertRowsEvent actor : dict | None database : str table : str num_rows : int ignore : bool replace : bool \n \n Event name: insert-rows \n Rows were inserted into a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the rows were inserted. \n \n \n table -- The name of the table where the rows were inserted. \n \n \n num_rows -- The number of rows that were requested to be inserted. \n \n \n ignore -- Was ignore set? \n \n \n replace -- Was replace set? \n \n \n \n \n \n \n \n \n \n class datasette.events. UpsertRowsEvent actor : dict | None database : str table : str num_rows : int \n \n Event name: upsert-rows \n Rows were upserted into a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the rows were inserted. \n \n \n table -- The name of the table where the rows were inserted. \n \n \n num_rows -- The number of rows that were requested to be inserted. \n \n \n \n \n \n \n \n \n \n class datasette.events. UpdateRowEvent actor : dict | None database : str table : str pks : list \n \n Event name: update-row \n A row was updated in a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the row was updated. \n \n \n table -- The name of the table where the row was updated. \n \n \n pks -- The primary key values of the updated row. \n \n \n \n \n \n \n \n \n \n class datasette.events. DeleteRowEvent actor : dict | None database : str table : str pks : list \n \n Event name: delete-row \n A row was deleted from a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the row was deleted. \n \n \n table -- The name of the table where the row was deleted. \n \n \n pks -- The primary key values of the deleted row.", "breadcrumbs": "[]", "references": "[]"} {"id": "facets:id1", "page": "facets", "ref": "id1", "title": "Facets", "content": "Datasette facets can be used to add a faceted browse interface to any database table.\n With facets, tables are displayed along with a summary showing the most common values in specified columns.\n These values can be selected to further filter the table. \n Here's an example : \n \n Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://congress-legislators.datasettes.com/legislators/legislator_terms?_facet=type&_facet=party&_facet=state&_facet_size=10\", \"label\": \"an example\"}]"} {"id": "full_text_search:id1", "page": "full_text_search", "ref": "id1", "title": "Full-text search", "content": "SQLite includes a powerful mechanism for enabling full-text search against SQLite records. Datasette can detect if a table has had full-text search configured for it in the underlying database and display a search interface for filtering that table. 
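As a rough sketch of how a table ends up searchable (the items table, its item column and the regmem.db file here are hypothetical names, not taken from this page), full-text search can be configured in the underlying database using SQLite's FTS5 module, after which Datasette will detect it:

import sqlite3  # illustrative sketch only; "items"/"item" are assumed names

conn = sqlite3.connect("regmem.db")  # hypothetical database file
conn.executescript(
    """
    CREATE VIRTUAL TABLE items_fts USING fts5(
        item,
        content='items',
        content_rowid='rowid'
    );
    -- populate the index from the existing table
    INSERT INTO items_fts (rowid, item) SELECT rowid, item FROM items;
    """
)
conn.commit()

Datasette spots FTS tables like items_fts and adds a search box to the corresponding table page.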
\n Here's an example search : \n \n Datasette automatically detects which tables have been configured for full-text search.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://www.sqlite.org/fts3.html\", \"label\": \"a powerful mechanism for enabling full-text search\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/regmem/items?_search=hamper&_sort_desc=date\", \"label\": \"an example search\"}]"} {"id": "installation:id1", "page": "installation", "ref": "id1", "title": "Installation", "content": "If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch \n \n There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. \n If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . \n \n \n \n Basic installation \n \n \n Datasette Desktop for Mac \n \n \n Using Homebrew \n \n \n Using pip \n \n \n \n \n Advanced installation options \n \n \n Using pipx \n \n \n Installing plugins using pipx \n \n \n Upgrading packages using pipx \n \n \n \n \n Using Docker \n \n \n Loading SpatiaLite \n \n \n Installing plugins \n \n \n \n \n \n \n A note about extensions", "breadcrumbs": "[]", "references": "[]"} {"id": "internals:id1", "page": "internals", "ref": "id1", "title": ".get_internal_database()", "content": "Returns a database object for reading and writing to the private internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "introspection:id1", "page": "introspection", "ref": "id1", "title": "Introspection", "content": "Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. \n Each of these pages can be viewed in your browser. Add .json to the URL to get back the contents as JSON.", "breadcrumbs": "[]", "references": "[]"} {"id": "javascript_plugins:id1", "page": "javascript_plugins", "ref": "id1", "title": "JavaScript plugins", "content": "Datasette can run custom JavaScript in several different ways: \n \n \n Datasette plugins written in Python can use the extra_js_urls() or extra_body_script() plugin hooks to inject JavaScript into a page \n \n \n Datasette instances with custom templates can include additional JavaScript in those templates \n \n \n The extra_js_urls key in datasette.yaml can be used to include extra JavaScript \n \n \n There are no limitations on what this JavaScript can do. It is executed directly by the browser, so it can manipulate the DOM, fetch additional data and do anything else that JavaScript is capable of. \n \n Custom JavaScript has security implications, especially for authenticated Datasette instances where the JavaScript might run in the context of the authenticated user. It's important to carefully review any JavaScript you run in your Datasette instance.", "breadcrumbs": "[]", "references": "[]"} {"id": "json_api:id1", "page": "json_api", "ref": "id1", "title": "JSON API", "content": "Datasette provides a JSON API for your SQLite databases. Anything you can do\n through the Datasette user interface can also be accessed as JSON via the API. 
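For instance, here is a minimal sketch of fetching a table's JSON representation programmatically using datasette.client, the ASGI-based client also shown under Testing plugins (fixtures.db and facetable are assumed example names):

import asyncio
from datasette.app import Datasette


async def main():
    # Build a Datasette instance, then request the table's .json endpoint
    datasette = Datasette(["fixtures.db"])  # assumed example database file
    response = await datasette.client.get("/fixtures/facetable.json")
    print(response.json())


asyncio.run(main())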
\n To access the API for a page, either click on the .json link on that page or\n edit the URL and add a .json extension to it.", "breadcrumbs": "[]", "references": "[]"} {"id": "metadata:id1", "page": "metadata", "ref": "id1", "title": "Metadata", "content": "Data loves metadata. Any time you run Datasette you can optionally include a\n YAML or JSON file with metadata about your databases and tables. Datasette will then\n display that information in the web UI. \n Run Datasette like this: \n datasette database1.db database2.db --metadata metadata.yaml \n Your metadata.yaml file can look something like this: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"title\": \"Custom title for your index page\",\n \"description\": \"Some description text can go here\",\n \"license\": \"ODbL\",\n \"license_url\": \"https://opendatacommons.org/licenses/odbl/\",\n \"source\": \"Original Data Source\",\n \"source_url\": \"http://example.com/\"\n}) \n ]]] \n [[[end]]] \n Choosing YAML over JSON adds support for multi-line strings and comments. \n The above metadata will be displayed on the index page of your Datasette-powered\n site. The source and license information will also be included in the footer of\n every page served by Datasette. \n Any special HTML characters in description will be escaped. If you want to\n include HTML in your description, you can use a description_html property\n instead.", "breadcrumbs": "[]", "references": "[]"} {"id": "plugin_hooks:id1", "page": "plugin_hooks", "ref": "id1", "title": "Plugin hooks", "content": "Datasette plugins use plugin hooks to customize Datasette's behavior. These hooks are powered by the pluggy plugin system. \n Each plugin can implement one or more hooks using the @hookimpl decorator against a function whose name matches one of the hooks documented on this page. \n When you implement a plugin hook you can accept any or all of the parameters that are documented as being passed to that hook. 
\n For example, you can implement the render_cell plugin hook like this even though the full documented hook signature is render_cell(row, value, column, table, database, datasette, request) : \n @hookimpl\ndef render_cell(value, column):\n if column == \"stars\":\n return \"*\" * int(value) \n \n List of plugin hooks \n \n \n prepare_connection(conn, database, datasette) \n \n \n prepare_jinja2_environment(env, datasette) \n \n \n Page extras \n \n \n extra_template_vars(template, database, table, columns, view_name, request, datasette) \n \n \n extra_css_urls(template, database, table, columns, view_name, request, datasette) \n \n \n extra_js_urls(template, database, table, columns, view_name, request, datasette) \n \n \n extra_body_script(template, database, table, columns, view_name, request, datasette) \n \n \n \n \n publish_subcommand(publish) \n \n \n render_cell(row, value, column, table, database, datasette, request) \n \n \n register_output_renderer(datasette) \n \n \n register_routes(datasette) \n \n \n register_commands(cli) \n \n \n register_facet_classes() \n \n \n register_permissions(datasette) \n \n \n asgi_wrapper(datasette) \n \n \n startup(datasette) \n \n \n canned_queries(datasette, database, actor) \n \n \n actor_from_request(datasette, request) \n \n \n actors_from_ids(datasette, actor_ids) \n \n \n jinja2_environment_from_request(datasette, request, env) \n \n \n filters_from_request(request, database, table, datasette) \n \n \n permission_allowed(datasette, actor, action, resource) \n \n \n register_magic_parameters(datasette) \n \n \n forbidden(datasette, request, message) \n \n \n handle_exception(datasette, request, exception) \n \n \n skip_csrf(datasette, scope) \n \n \n get_metadata(datasette, key, database, table) \n \n \n menu_links(datasette, actor, request) \n \n \n Action hooks \n \n \n table_actions(datasette, actor, database, table, request) \n \n \n view_actions(datasette, actor, database, view, request) \n \n \n query_actions(datasette, actor, database, query_name, request, sql, params) \n \n \n row_actions(datasette, actor, request, database, table, row) \n \n \n database_actions(datasette, actor, database, request) \n \n \n homepage_actions(datasette, actor, request) \n \n \n \n \n Template slots \n \n \n top_homepage(datasette, request) \n \n \n top_database(datasette, request, database) \n \n \n top_table(datasette, request, database, table) \n \n \n top_row(datasette, request, database, table, row) \n \n \n top_query(datasette, request, database, sql) \n \n \n top_canned_query(datasette, request, database, query_name) \n \n \n \n \n Event tracking \n \n \n track_event(datasette, event) \n \n \n register_events(datasette)", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pluggy.readthedocs.io/\", \"label\": \"pluggy\"}]"} {"id": "plugins:id1", "page": "plugins", "ref": "id1", "title": "Plugins", "content": "Datasette's plugin system allows additional features to be implemented as Python\n code (or front-end JavaScript) which can be wrapped up in a separate Python\n package. The underlying mechanism uses pluggy . \n See the Datasette plugins directory for a list of existing plugins, or take a look at the\n datasette-plugin topic on GitHub. \n Things you can do with plugins include: \n \n \n Add visualizations to Datasette, for example\n datasette-cluster-map and\n datasette-vega . \n \n \n Make new custom SQL functions available for use within Datasette, for example\n datasette-haversine and\n datasette-jellyfish . 
\n \n \n Define custom output formats with custom extensions, for example datasette-atom and\n datasette-ics . \n \n \n Add template functions that can be called within your Jinja custom templates,\n for example datasette-render-markdown . \n \n \n Customize how database values are rendered in the Datasette interface, for example\n datasette-render-binary and\n datasette-pretty-json . \n \n \n Customize how Datasette's authentication and permissions systems work, for example datasette-auth-passwords and\n datasette-permissions-sql .", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pluggy.readthedocs.io/\", \"label\": \"pluggy\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Datasette plugins directory\"}, {\"href\": \"https://github.com/topics/datasette-plugin\", \"label\": \"datasette-plugin\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://github.com/simonw/datasette-vega\", \"label\": \"datasette-vega\"}, {\"href\": \"https://github.com/simonw/datasette-haversine\", \"label\": \"datasette-haversine\"}, {\"href\": \"https://github.com/simonw/datasette-jellyfish\", \"label\": \"datasette-jellyfish\"}, {\"href\": \"https://github.com/simonw/datasette-atom\", \"label\": \"datasette-atom\"}, {\"href\": \"https://github.com/simonw/datasette-ics\", \"label\": \"datasette-ics\"}, {\"href\": \"https://github.com/simonw/datasette-render-markdown#markdown-in-templates\", \"label\": \"datasette-render-markdown\"}, {\"href\": \"https://github.com/simonw/datasette-render-binary\", \"label\": \"datasette-render-binary\"}, {\"href\": \"https://github.com/simonw/datasette-pretty-json\", \"label\": \"datasette-pretty-json\"}, {\"href\": \"https://github.com/simonw/datasette-auth-passwords\", \"label\": \"datasette-auth-passwords\"}, {\"href\": \"https://github.com/simonw/datasette-permissions-sql\", \"label\": \"datasette-permissions-sql\"}]"} {"id": "settings:id1", "page": "settings", "ref": "id1", "title": "Settings", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "spatialite:id1", "page": "spatialite", "ref": "id1", "title": "SpatiaLite", "content": "The SpatiaLite module for SQLite adds features for handling geographic and spatial data. For an example of what you can do with it, see the tutorial Building a location to time zone API with SpatiaLite . \n To use it with Datasette, you need to install the mod_spatialite dynamic library. This can then be loaded into Datasette using the --load-extension command-line option. \n Datasette can look for SpatiaLite in common installation locations if you run it like this: \n datasette --load-extension=spatialite --setting default_allow_sql off \n If SpatiaLite is in another location, use the full path to the extension instead: \n datasette --setting default_allow_sql off \\\n --load-extension=/usr/local/lib/mod_spatialite.dylib", "breadcrumbs": "[]", "references": "[{\"href\": \"https://www.gaia-gis.it/fossil/libspatialite/index\", \"label\": \"SpatiaLite module\"}, {\"href\": \"https://datasette.io/tutorials/spatialite\", \"label\": \"Building a location to time zone API with SpatiaLite\"}]"} {"id": "sql_queries:id1", "page": "sql_queries", "ref": "id1", "title": "Canned queries", "content": "As an alternative to adding views to your database, you can define canned queries inside your datasette.yaml file. 
Here's an example: \n [[[cog\nfrom metadata_doc import config_example\nconfig_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"queries\": {\n \"just_species\": {\n \"sql\": \"select qSpecies from Street_Tree_List\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n Then run Datasette like this: \n datasette sf-trees.db -c datasette.yaml \n Each canned query will be listed on the database index page, and will also get its own URL at: \n /database-name/canned-query-name \n For the above example, that URL would be: \n /sf-trees/just_species \n You can optionally include \"title\" and \"description\" keys to show a title and description on the canned query page. As with regular table metadata you can alternatively specify \"description_html\" to have your description rendered as HTML (rather than having HTML special characters escaped).", "breadcrumbs": "[\"Running SQL queries\"]", "references": "[]"} {"id": "testing_plugins:id1", "page": "testing_plugins", "ref": "id1", "title": "Testing plugins", "content": "We recommend using pytest to write automated tests for your plugins. \n If you use the template described in Starting an installable plugin using cookiecutter your plugin will start with a single test in your tests/ directory that looks like this: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_plugin_is_installed():\n datasette = Datasette(memory=True)\n response = await datasette.client.get(\"/-/plugins.json\")\n assert response.status_code == 200\n installed_plugins = {p[\"name\"] for p in response.json()}\n assert (\n \"datasette-plugin-template-demo\"\n in installed_plugins\n ) \n This test uses the datasette.client object to exercise a test instance of Datasette. datasette.client is a wrapper around the HTTPX Python library which can imitate HTTP requests using ASGI. This is the recommended way to write tests against a Datasette instance. \n This test also uses the pytest-asyncio package to add support for async def test functions running under pytest. \n You can install these packages like so: \n pip install pytest pytest-asyncio \n If you are building an installable package you can add them as test dependencies to your setup.py module like this: \n setup(\n name=\"datasette-my-plugin\",\n # ...\n extras_require={\"test\": [\"pytest\", \"pytest-asyncio\"]},\n tests_require=[\"datasette-my-plugin[test]\"],\n) \n You can then install the test dependencies like so: \n pip install -e '.[test]' \n Then run the tests using pytest like so: \n pytest", "breadcrumbs": "[]", "references": "[{\"href\": \"https://docs.pytest.org/\", \"label\": \"pytest\"}, {\"href\": \"https://www.python-httpx.org/\", \"label\": \"HTTPX\"}, {\"href\": \"https://pypi.org/project/pytest-asyncio/\", \"label\": \"pytest-asyncio\"}]"} {"id": "writing_plugins:id1", "page": "writing_plugins", "ref": "id1", "title": "Writing plugins", "content": "You can write one-off plugins that apply to just one Datasette instance, or you can write plugins which can be installed using pip and can be shipped to the Python Package Index ( PyPI ) for other people to install. \n Want to start by looking at an example? The Datasette plugins directory lists more than 90 open source plugins with code you can explore. 
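To give a flavor of what a one-off plugin can look like, here is a sketch (the reverse_string function and file names are made-up examples, not from this page) of a single Python file that implements the prepare_connection hook and is loaded with the --plugins-dir option:

# plugins/my_plugin.py - hypothetical one-off plugin sketch
# Run with: datasette mydb.db --plugins-dir=plugins/
from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Register a custom SQL function, usable as: select reverse_string('hello')
    conn.create_function(
        "reverse_string", 1, lambda s: s[::-1] if isinstance(s, str) else s
    )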
The plugin hooks page includes links to example plugins for each of the documented hooks.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pypi.org/\", \"label\": \"PyPI\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Datasette plugins directory\"}]"} {"id": "changelog:id10", "page": "changelog", "ref": "id10", "title": "0.63.1 (2022-11-10)", "content": "Fixed a bug where Datasette's table filter form would not redirect correctly when run behind a proxy using the base_url setting. ( #1883 ) \n \n \n SQL query is now shown wrapped in a