{"id": "contributing:contributing-upgrading-codemirror", "page": "contributing", "ref": "contributing-upgrading-codemirror", "title": "Upgrading CodeMirror", "content": "Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . Here are the steps for upgrading to a new version of CodeMirror: \n \n \n Install the packages with: \n npm i codemirror @codemirror/lang-sql \n \n \n Build the bundle using the version number from package.json with: \n node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js \\\n -f iife \\\n -n cm \\\n -o datasette/static/cm-editor-6.0.1.bundle.js \\\n -p @rollup/plugin-node-resolve \\\n -p @rollup/plugin-terser \n \n \n Update the version reference in the codemirror.html template.", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://codemirror.net/\", \"label\": \"CodeMirror\"}, {\"href\": \"https://latest.datasette.io/fixtures\", \"label\": \"this page\"}]"} {"id": "contributing:contributing-using-fixtures", "page": "contributing", "ref": "contributing-using-fixtures", "title": "Using fixtures", "content": "To run Datasette itself, type datasette . \n You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. \n You can create a copy of that database by running this command: \n python tests/fixtures.py fixtures.db \n Now you can run Datasette against the new fixtures database like so: \n datasette fixtures.db \n This will start a server at http://127.0.0.1:8001/ . \n Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript). \n If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: \n datasette --reload fixtures.db \n You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that: \n python tests/fixtures.py fixtures.db fixtures-metadata.json \n Or to output the plugins used by the tests, run this: \n python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins\nTest tables written to fixtures.db\n- metadata written to fixtures-metadata.json\nWrote plugin: fixtures-plugins/register_output_renderer.py\nWrote plugin: fixtures-plugins/view_name.py\nWrote plugin: fixtures-plugins/my_plugin.py\nWrote plugin: fixtures-plugins/messages_output_renderer.py\nWrote plugin: fixtures-plugins/my_plugin_2.py \n Then run Datasette like this: \n datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/", "breadcrumbs": "[\"Contributing\"]", "references": "[]"} {"id": "contributing:devenvironment", "page": "contributing", "ref": "devenvironment", "title": "Setting up a development environment", "content": "If you have Python 3.8 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps. \n If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account. 
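If you use the GitHub CLI you can create the fork and clone it in a single step (an optional alternative to the git clone commands below): \n gh repo fork simonw/datasette --clone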
\n Now clone that repository somewhere on your computer: \n git clone git@github.com:YOURNAME/datasette \n If you want to get started without creating your own fork, you can do this instead: \n git clone git@github.com:simonw/datasette \n The next step is to create a virtual environment for your project and use it to install Datasette's dependencies: \n cd datasette\n# Create a virtual environment in ./venv\npython3 -m venv ./venv\n# Now activate the virtual environment, so pip can install into it\nsource venv/bin/activate\n# Install Datasette and its testing dependencies\npython3 -m pip install -e '.[test]' \n That last line does most of the work: pip install -e means \"install this package in a way that allows me to edit the source code in place\". The .[test] option means \"use the setup.py in this directory and install the optional testing dependencies as well\".", "breadcrumbs": "[\"Contributing\"]", "references": "[{\"href\": \"https://docs.python-guide.org/starting/install3/osx/\", \"label\": \"is using homebrew\"}, {\"href\": \"https://github.com/simonw/datasette/fork\", \"label\": \"create a fork of datasette\"}]"}
{"id": "contributing:general-guidelines", "page": "contributing", "ref": "general-guidelines", "title": "General guidelines", "content": "main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. \n \n \n The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. \n \n \n New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them.", "breadcrumbs": "[\"Contributing\"]", "references": "[]"}
{"id": "contributing:id1", "page": "contributing", "ref": "id1", "title": "Contributing", "content": "Datasette is an open source project. We welcome contributions! \n This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins .", "breadcrumbs": "[]", "references": "[]"}
{"id": "csv_export:csv-export-url-parameters", "page": "csv_export", "ref": "csv-export-url-parameters", "title": "URL parameters", "content": "The following options can be used to customize the CSVs returned by Datasette. \n \n \n ?_header=off \n \n This removes the first row of the CSV file specifying the headings - only the row data will be returned. \n \n \n \n ?_stream=on \n \n Stream all matching records, not just the first page of results. See below. \n \n \n \n ?_dl=on \n \n Causes Datasette to return a content-disposition: attachment; filename=\"filename.csv\" header.", "breadcrumbs": "[\"CSV export\"]", "references": "[]"}
{"id": "csv_export:id1", "page": "csv_export", "ref": "id1", "title": "CSV export", "content": "Any Datasette table, view or custom SQL query can be exported as CSV. \n To obtain the CSV representation of the table you are looking at, click the \"this\n data as CSV\" link. \n You can also use the advanced export form for more control over the resulting\n file, which has the following options: \n \n \n \n download file - instead of displaying CSV in your browser, this forces\n your browser to download the CSV to your downloads directory. \n \n \n expand labels - if your table has any foreign key references this option\n will cause the CSV to gain additional COLUMN_NAME_label columns with a\n label for each foreign key derived from the linked table.
In this example \n the city_id column is accompanied by a city_id_label column. \n \n \n stream all rows - by default CSV files only contain the first\n max_returned_rows records. This option will cause Datasette to\n loop through every matching record and return them as a single CSV file. \n \n \n You can try that out on https://latest.datasette.io/fixtures/facetable?_size=4", "breadcrumbs": "[]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/facetable.csv?_labels=on&_size=max\", \"label\": \"In this example\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable?_size=4\", \"label\": \"https://latest.datasette.io/fixtures/facetable?_size=4\"}]"}
{"id": "csv_export:streaming-all-records", "page": "csv_export", "ref": "streaming-all-records", "title": "Streaming all records", "content": "The stream all rows option is designed to be as efficient as possible -\n under the hood it takes advantage of Python 3 asyncio capabilities and\n Datasette's efficient pagination to stream back the full\n CSV file. \n Since databases can get pretty large, by default this option is capped at 100MB -\n if a table returns more than 100MB of data the last line of the CSV will be a\n truncation error message. \n You can increase or remove this limit using the max_csv_mb config\n setting. You can also disable the CSV export feature entirely using\n allow_csv_stream .", "breadcrumbs": "[\"CSV export\"]", "references": "[]"}
{"id": "custom_templates:css-classes-on-the-body", "page": "custom_templates", "ref": "css-classes-on-the-body", "title": "CSS classes on the <body>", "content": "Every default template includes CSS classes in the body designed to support\n custom styling. \n The index template (the top level page at / ) gets this: \n <body class=\"index\"> \n The database template ( /dbname ) gets this: \n <body class=\"db db-dbname\"> \n The custom SQL template ( /dbname?sql=... ) gets this: \n <body class=\"query db-dbname\"> \n A canned query template ( /dbname/queryname ) gets this: \n <body class=\"query db-dbname query-queryname\"> \n The table template ( /dbname/tablename ) gets: \n <body class=\"table db-dbname table-tablename\"> \n The row template ( /dbname/tablename/rowid ) gets: \n <body class=\"row db-dbname table-tablename\"> \n The db-x and table-x classes use the database or table names themselves if\n they are valid CSS identifiers. If they aren't, we strip any invalid\n characters out and append a 6 character md5 digest of the original name, in\n order to ensure that multiple tables which resolve to the same stripped\n character version still have different CSS classes. \n Some examples: \n \"simple\" => \"simple\"\n\"MixedCase\" => \"MixedCase\"\n\"-no-leading-hyphens\" => \"no-leading-hyphens-65bea6\"\n\"_no-leading-underscores\" => \"no-leading-underscores-b921bc\"\n\"no spaces\" => \"no-spaces-7088d7\"\n\"-\" => \"336d5e\"\n\"no $ characters\" => \"no--characters-59e024\" \n <th> and <td> elements also get custom CSS classes reflecting the\n database column they are representing, for example:
<table>\n    <tr>\n        <th class=\"col-id\" scope=\"col\">id</th>\n        <th class=\"col-name\" scope=\"col\">name</th>\n    </tr>\n    <tr>\n        <td class=\"col-id\">1</td>\n        <td class=\"col-name\">SMITH</td>\n    </tr>\n</table>", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"}
{"id": "custom_templates:custom-pages-404", "page": "custom_templates", "ref": "custom-pages-404", "title": "Returning 404s", "content": "To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function: \n {% if not rows %}\n {{ raise_404(\"Content not found\") }}\n{% endif %} \n If you call raise_404() the other content in your template will be ignored.", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"}
{"id": "custom_templates:custom-pages-errors", "page": "custom_templates", "ref": "custom-pages-errors", "title": "Custom error pages", "content": "Datasette returns an error page if an unexpected error occurs, access is forbidden or content cannot be found. \n You can customize the response returned for these errors by providing a custom error page template. \n Content not found errors use a 404.html template. Access denied errors use 403.html . Invalid input errors use 400.html . Unexpected errors of other kinds use 500.html . \n If a template for the specific error code is not found a template called error.html will be used instead. If you do not provide that template Datasette's default error.html template will be used. \n The error template will be passed the following context: \n \n \n status - integer \n \n The integer HTTP status code, e.g. 404, 500, 403, 400. \n \n \n \n error - string \n \n Details of the specific error, usually a full sentence. \n \n \n \n title - string or None \n \n A title for the page representing the class of error. This is often None for errors that do not provide a title separate from their error message.", "breadcrumbs": "[\"Custom pages and templates\", \"Custom redirects\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/blob/main/datasette/templates/error.html\", \"label\": \"default error.html template\"}]"}
{"id": "custom_templates:custom-pages-headers", "page": "custom_templates", "ref": "custom-pages-headers", "title": "Custom headers and status codes", "content": "Custom pages default to being served with a content-type of text/html; charset=utf-8 and a 200 status code. You can change these by calling a custom function from within your template. \n For example, to serve a custom page with a 418 I'm a teapot HTTP status code, create a file in pages/teapot.html containing the following: \n {{ custom_status(418) }}\n<html>\n<head><title>Teapot</title></head>\n<body>\nI'm a teapot\n</body>\n</html> \n To serve a custom HTTP header, add a custom_header(name, value) function call. For example: \n {{ custom_status(418) }}\n{{ custom_header(\"x-teapot\", \"I am\") }}\n<html>\n<head><title>Teapot</title></head>\n<body>\nI'm a teapot\n</body>\n</html> \n You can verify this is working using curl like this: \n curl -I 'http://127.0.0.1:8001/teapot'\nHTTP/1.1 418\ndate: Sun, 26 Apr 2020 18:38:30 GMT\nserver: uvicorn\nx-teapot: I am\ncontent-type: text/html; charset=utf-8", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"}
{"id": "custom_templates:custom-pages-parameters", "page": "custom_templates", "ref": "custom-pages-parameters", "title": "Path parameters for pages", "content": "You can define custom pages that match multiple paths by creating files with {variable} definitions in their filenames. \n For example, to capture any request to a URL matching /about/* , you would create a template in the following location: \n templates/pages/about/{slug}.html \n A hit to /about/news would render that template and pass in a variable called slug with a value of \"news\" .
\n If you use this mechanism don't forget to return a 404 if the referenced content could not be found. You can do this using {{ raise_404() }} described below. \n Templates defined using custom page routes work particularly well with the sql() template function from datasette-template-sql or the graphql() template function from datasette-graphql .", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette-template-sql\", \"label\": \"datasette-template-sql\"}, {\"href\": \"https://github.com/simonw/datasette-graphql#the-graphql-template-function\", \"label\": \"datasette-graphql\"}]"} {"id": "custom_templates:custom-pages-redirects", "page": "custom_templates", "ref": "custom-pages-redirects", "title": "Custom redirects", "content": "You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html : \n {{ custom_redirect(\"https://github.com/simonw/datasette\") }} \n Now requests to http://localhost:8001/datasette will result in a redirect. \n These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function: \n {{ custom_redirect(\"https://github.com/simonw/datasette\", 301) }}", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"} {"id": "custom_templates:customization", "page": "custom_templates", "ref": "customization", "title": "Custom pages and templates", "content": "Datasette provides a number of ways of customizing the way data is displayed.", "breadcrumbs": "[]", "references": "[]"} {"id": "custom_templates:customization-custom-templates", "page": "custom_templates", "ref": "customization-custom-templates", "title": "Custom templates", "content": "By default, Datasette uses default templates that ship with the package. \n You can over-ride these templates by specifying a custom --template-dir like\n this: \n datasette mydb.db --template-dir=mytemplates/ \n Datasette will now first look for templates in that directory, and fall back on\n the defaults if no matches are found. \n It is also possible to over-ride templates on a per-database, per-row or per-\n table basis. \n The lookup rules Datasette uses are as follows: \n Index page (/):\n index.html\n\nDatabase page (/mydatabase):\n database-mydatabase.html\n database.html\n\nCustom query page (/mydatabase?sql=...):\n query-mydatabase.html\n query.html\n\nCanned query page (/mydatabase/canned-query):\n query-mydatabase-canned-query.html\n query-mydatabase.html\n query.html\n\nTable page (/mydatabase/mytable):\n table-mydatabase-mytable.html\n table.html\n\nRow page (/mydatabase/mytable/id):\n row-mydatabase-mytable.html\n row.html\n\nTable of rows and columns include on table page:\n _table-table-mydatabase-mytable.html\n _table-mydatabase-mytable.html\n _table.html\n\nTable of rows and columns include on row page:\n _table-row-mydatabase-mytable.html\n _table-mydatabase-mytable.html\n _table.html \n If a table name has spaces or other unexpected characters in it, the template\n filename will follow the same rules as our custom CSS classes - for\n example, a table called \"Food Trucks\" will attempt to load the following\n templates: \n table-mydatabase-Food-Trucks-399138.html\ntable.html \n You can find out which templates were considered for a specific page by viewing\n source on that page and looking for an HTML comment at the bottom. 
The comment\n will look something like this: \n <!-- Templates considered: *query-mydb-tz.html, query-mydb.html, query.html --> \n This example is from the canned query page for a query called \"tz\" in the\n database called \"mydb\". The asterisk shows which template was selected - so in\n this case, Datasette found a template file called query-mydb-tz.html and\n used that - but if that template had not been found, it would have tried for\n query-mydb.html or the default query.html . \n It is possible to extend the default templates using Jinja template\n inheritance. If you want to customize EVERY row template with some additional\n content you can do so by creating a row.html template like this: \n {% extends \"default:row.html\" %}\n\n{% block content %}\n
<h1>EXTRA HTML AT THE TOP OF THE CONTENT BLOCK</h1>\n<p>This line renders the original block:</p>
\n{{ super() }}\n{% endblock %} \n Note the default:row.html template name, which ensures Jinja will inherit\n from the default template. \n The _table.html template is included by both the row and the table pages,\n and is passed a list of rows. The default _table.html template renders them as an\n HTML table and can be seen here . \n You can provide a custom template that applies to all of your databases and\n tables, or you can provide custom templates for specific tables using the\n template naming scheme described above. \n If you want to present your data in a format other than an HTML table, you\n can do so by looping through display_rows in your own _table.html \n template. You can use {{ row[\"column_name\"] }} to output the raw value\n of a specific column. \n If you want to output the rendered HTML version of a column, including any\n links to foreign keys, you can use {{ row.display(\"column_name\") }} . \n Here is an example of a custom _table.html template: \n {% for row in display_rows %}\n
    <div>\n        <h2>{{ row[\"title\"] }}</h2>\n        <p>{{ row[\"description\"] }}</p>\n        <p>Category: {{ row.display(\"category_id\") }}</p>\n    </div>
\n{% endfor %}", "breadcrumbs": "[\"Custom pages and templates\", \"Publishing static assets\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/blob/main/datasette/templates/_table.html\", \"label\": \"can be seen here\"}]"} {"id": "custom_templates:customization-static-files", "page": "custom_templates", "ref": "customization-static-files", "title": "Serving static files", "content": "Datasette can serve static files for you, using the --static option.\n Consider the following directory structure: \n metadata.json\nstatic-files/styles.css\nstatic-files/app.js \n You can start Datasette using --static assets:static-files/ to serve those\n files from the /assets/ mount point: \n datasette --config datasette.yaml --static assets:static-files/ --memory \n The following URLs will now serve the content from those CSS and JS files: \n http://localhost:8001/assets/styles.css\nhttp://localhost:8001/assets/app.js \n You can reference those files from datasette.yaml like this, see custom CSS and JavaScript for more details: \n [[[cog\nfrom metadata_doc import config_example\nconfig_example(cog, \"\"\"\n extra_css_urls:\n - /assets/styles.css\n extra_js_urls:\n - /assets/app.js\n\"\"\") \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"} {"id": "custom_templates:id1", "page": "custom_templates", "ref": "id1", "title": "Custom pages", "content": "You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory. \n For example, to add a custom page that is served at http://localhost/about you would create a file in templates/pages/about.html , then start Datasette like this: \n datasette mydb.db --template-dir=templates/ \n You can nest directories within pages to create a nested structure. To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html .", "breadcrumbs": "[\"Custom pages and templates\", \"Publishing static assets\"]", "references": "[]"} {"id": "custom_templates:publishing-static-assets", "page": "custom_templates", "ref": "publishing-static-assets", "title": "Publishing static assets", "content": "The datasette publish command can be used to publish your static assets,\n using the same syntax as above: \n datasette publish cloudrun mydb.db --static assets:static-files/ \n This will upload the contents of the static-files/ directory as part of the\n deployment, and configure Datasette to correctly serve the assets from /assets/ .", "breadcrumbs": "[\"Custom pages and templates\"]", "references": "[]"} {"id": "deploying:apache-proxy-configuration", "page": "deploying", "ref": "apache-proxy-configuration", "title": "Apache proxy configuration", "content": "For Apache , you can use the ProxyPass directive. First make sure the following lines are uncommented: \n LoadModule proxy_module lib/httpd/modules/mod_proxy.so\nLoadModule proxy_http_module lib/httpd/modules/mod_proxy_http.so \n Then add these directives to proxy traffic: \n ProxyPass /my-datasette/ http://127.0.0.1:8009/my-datasette/\nProxyPreserveHost On \n A live demo of Datasette running behind Apache using this proxy setup can be seen at datasette-apache-proxy-demo.datasette.io/prefix/ . The code for that demo can be found in the demos/apache-proxy directory. 
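Once Apache has reloaded its configuration you can confirm the prefix is being proxied with curl, assuming Apache is listening on port 80 of the same machine: \n curl -I 'http://127.0.0.1/my-datasette/'\n# Expect a 200 response served by Datasette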
\n Using --uds you can use Unix domain sockets similar to the nginx example: \n ProxyPass /my-datasette/ unix:/tmp/datasette.sock|http://localhost/my-datasette/ \n The ProxyPreserveHost On directive ensures that the original Host: header from the incoming request is passed through to Datasette. Datasette needs this to correctly assemble links to other pages using the .absolute_url(request, path) method.", "breadcrumbs": "[\"Deploying Datasette\", \"Running Datasette behind a proxy\"]", "references": "[{\"href\": \"https://httpd.apache.org/\", \"label\": \"Apache\"}, {\"href\": \"https://datasette-apache-proxy-demo.datasette.io/prefix/\", \"label\": \"datasette-apache-proxy-demo.datasette.io/prefix/\"}, {\"href\": \"https://github.com/simonw/datasette/tree/main/demos/apache-proxy\", \"label\": \"demos/apache-proxy\"}, {\"href\": \"https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost\", \"label\": \"ProxyPreserveHost On\"}]"} {"id": "deploying:deploying", "page": "deploying", "ref": "deploying", "title": "Deploying Datasette", "content": "The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. \n You can deploy Datasette to other hosting providers using the instructions on this page.", "breadcrumbs": "[]", "references": "[]"} {"id": "deploying:deploying-buildpacks", "page": "deploying", "ref": "deploying-buildpacks", "title": "Deploying using buildpacks", "content": "Some hosting providers such as Heroku , DigitalOcean App Platform and Scalingo support the Buildpacks standard for deploying Python web applications. \n Deploying Datasette on these platforms requires two files: requirements.txt and Procfile . \n The requirements.txt file lets the platform know which Python packages should be installed. It should contain datasette at a minimum, but can also list any Datasette plugins you wish to install - for example: \n datasette\ndatasette-vega \n The Procfile lets the hosting platform know how to run the command that serves web traffic. It should look like this: \n web: datasette . -h 0.0.0.0 -p $PORT --cors \n The $PORT environment variable is provided by the hosting platform. --cors enables CORS requests from JavaScript running on other websites to your domain - omit this if you don't want to allow CORS. You can add additional Datasette Settings options here too. \n These two files should be enough to deploy Datasette on any host that supports buildpacks. Datasette will serve any SQLite files that are included in the root directory of the application. \n If you want to build SQLite files or download them as part of the deployment process you can do so using a bin/post_compile file. 
For example, the following bin/post_compile will download an example database that will then be served by Datasette: \n wget https://fivethirtyeight.datasettes.com/fivethirtyeight.db \n simonw/buildpack-datasette-demo is an example GitHub repository showing a Datasette configuration that can be deployed to a buildpack-supporting host.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[{\"href\": \"https://www.heroku.com/\", \"label\": \"Heroku\"}, {\"href\": \"https://www.digitalocean.com/docs/app-platform/\", \"label\": \"DigitalOcean App Platform\"}, {\"href\": \"https://scalingo.com/\", \"label\": \"Scalingo\"}, {\"href\": \"https://buildpacks.io/\", \"label\": \"Buildpacks standard\"}, {\"href\": \"https://github.com/simonw/buildpack-datasette-demo\", \"label\": \"simonw/buildpack-datasette-demo\"}]"} {"id": "deploying:deploying-fundamentals", "page": "deploying", "ref": "deploying-fundamentals", "title": "Deployment fundamentals", "content": "Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000. \n If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:deploying-openrc", "page": "deploying", "ref": "deploying-openrc", "title": "Running Datasette using OpenRC", "content": "OpenRC is the service manager on non-systemd Linux distributions like Alpine Linux and Gentoo . \n Create an init script at /etc/init.d/datasette with the following contents: \n #!/sbin/openrc-run\n\nname=\"datasette\"\ncommand=\"datasette\"\ncommand_args=\"serve -h 0.0.0.0 /path/to/db.db\"\ncommand_background=true\npidfile=\"/run/${RC_SVCNAME}.pid\" \n You then need to configure the service to run at boot and start it: \n rc-update add datasette\nrc-service datasette start", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[{\"href\": \"https://www.alpinelinux.org/\", \"label\": \"Alpine Linux\"}, {\"href\": \"https://www.gentoo.org/\", \"label\": \"Gentoo\"}]"} {"id": "deploying:deploying-proxy", "page": "deploying", "ref": "deploying-proxy", "title": "Running Datasette behind a proxy", "content": "You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site. \n You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this: \n datasette my-database.db --setting base_url /my-datasette/ -p 8009 \n This will run Datasette with the following URLs: \n \n \n http://127.0.0.1:8009/my-datasette/ - the Datasette homepage \n \n \n http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database \n \n \n http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table \n \n \n You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance.", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:deploying-systemd", "page": "deploying", "ref": "deploying-systemd", "title": "Running Datasette using systemd", "content": "You can run Datasette on Ubuntu or Debian systems using systemd . \n First, ensure you have Python 3 and pip installed. 
On Ubuntu you can use sudo apt-get install python3 python3-pip . \n You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette . \n Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root . \n You can copy a test database into that folder like so: \n cd /home/ubuntu/datasette-root\ncurl -O https://latest.datasette.io/fixtures.db \n Create a file at /etc/systemd/system/datasette.service with the following contents: \n [Unit]\nDescription=Datasette\nAfter=network.target\n\n[Service]\nType=simple\nUser=ubuntu\nEnvironment=DATASETTE_SECRET=\nWorkingDirectory=/home/ubuntu/datasette-root\nExecStart=datasette serve . -h 127.0.0.1 -p 8000\nRestart=on-failure\n\n[Install]\nWantedBy=multi-user.target \n Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so: \n python3 -c 'import secrets; print(secrets.token_hex(32))' \n This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details. \n You can start the Datasette process running using the following: \n sudo systemctl daemon-reload\nsudo systemctl start datasette.service \n You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using: \n sudo systemctl restart datasette.service \n Once the service has started you can confirm that Datasette is running on port 8000 like so: \n curl 127.0.0.1:8000/-/versions.json\n# Should output JSON showing the installed version \n Datasette will not be accessible from outside the server because it is listening on 127.0.0.1 . 
You can expose it by instead listening on 0.0.0.0 , but a better way is to set up a proxy such as nginx - see Running Datasette behind a proxy .", "breadcrumbs": "[\"Deploying Datasette\"]", "references": "[]"} {"id": "deploying:nginx-proxy-configuration", "page": "deploying", "ref": "nginx-proxy-configuration", "title": "Nginx proxy configuration", "content": "Here is an example of an nginx configuration file that will proxy traffic to Datasette: \n daemon off;\n\nevents {\n worker_connections 1024;\n}\nhttp {\n server {\n listen 80;\n location /my-datasette {\n proxy_pass http://127.0.0.1:8009/my-datasette;\n proxy_set_header Host $host;\n }\n }\n} \n You can also use the --uds option to Datasette to listen on a Unix domain socket instead of a port, configuring the nginx upstream proxy like this: \n daemon off;\nevents {\n worker_connections 1024;\n}\nhttp {\n server {\n listen 80;\n location /my-datasette {\n proxy_pass http://datasette/my-datasette;\n proxy_set_header Host $host;\n }\n }\n upstream datasette {\n server unix:/tmp/datasette.sock;\n }\n} \n Then run Datasette with datasette --uds /tmp/datasette.sock path/to/database.db --setting base_url /my-datasette/ .", "breadcrumbs": "[\"Deploying Datasette\", \"Running Datasette behind a proxy\"]", "references": "[{\"href\": \"https://nginx.org/\", \"label\": \"nginx\"}]"} {"id": "ecosystem:dogsheep", "page": "ecosystem", "ref": "dogsheep", "title": "Dogsheep", "content": "Dogsheep is a collection of tools for personal analytics using SQLite and Datasette. The project provides tools like github-to-sqlite and twitter-to-sqlite that can import data from different sources in order to create a personal data warehouse. Personal Data Warehouses: Reclaiming Your Data is a talk that explains Dogsheep and demonstrates it in action.", "breadcrumbs": "[\"The Datasette Ecosystem\"]", "references": "[{\"href\": \"https://dogsheep.github.io/\", \"label\": \"Dogsheep\"}, {\"href\": \"https://datasette.io/tools/github-to-sqlite\", \"label\": \"github-to-sqlite\"}, {\"href\": \"https://datasette.io/tools/twitter-to-sqlite\", \"label\": \"twitter-to-sqlite\"}, {\"href\": \"https://simonwillison.net/2020/Nov/14/personal-data-warehouses/\", \"label\": \"Personal Data Warehouses: Reclaiming Your Data\"}]"} {"id": "ecosystem:ecosystem", "page": "ecosystem", "ref": "ecosystem", "title": "The Datasette Ecosystem", "content": "Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data. \n These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality. \n The Datasette project website includes a directory of plugins and a directory of tools: \n \n \n Plugins directory on datasette.io \n \n \n Tools directory on datasette.io", "breadcrumbs": "[]", "references": "[{\"href\": \"https://datasette.io/\", \"label\": \"Datasette project website\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Plugins directory on datasette.io\"}, {\"href\": \"https://datasette.io/tools\", \"label\": \"Tools directory on datasette.io\"}]"} {"id": "ecosystem:sqlite-utils", "page": "ecosystem", "ref": "sqlite-utils", "title": "sqlite-utils", "content": "sqlite-utils is a key building block for the wider Datasette ecosystem. It provides a collection of utilities for manipulating SQLite databases, both as a Python library and a command-line utility. 
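As a quick taste of the command-line side, this one-liner (with hypothetical file and table names) imports a CSV file into a new table, inferring the schema automatically: \n sqlite-utils insert data.db mytable data.csv --csv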
Features include: \n \n \n Insert data into a SQLite database from JSON, CSV or TSV, automatically creating tables with the correct schema or altering existing tables to add missing columns. \n \n \n Configure tables for use with SQLite full-text search, including creating triggers needed to keep the search index up-to-date. \n \n \n Modify tables in ways that are not supported by SQLite's default ALTER TABLE syntax - for example changing the types of columns or selecting a new primary key for a table. \n \n \n Adding foreign keys to existing database tables. \n \n \n Extracting columns of data into a separate lookup table.", "breadcrumbs": "[\"The Datasette Ecosystem\"]", "references": "[{\"href\": \"https://sqlite-utils.datasette.io/\", \"label\": \"sqlite-utils\"}]"} {"id": "events:id1", "page": "events", "ref": "id1", "title": "Events", "content": "Datasette includes a mechanism for tracking events that occur while the software is running. This is primarily intended to be used by plugins, which can both trigger events and listen for events. \n The core Datasette application triggers events when certain things happen. This page describes those events. \n Plugins can listen for events using the track_event(datasette, event) plugin hook, which will be called with instances of the following classes - or additional classes registered by other plugins . \n \n \n \n \n class datasette.events. LoginEvent actor : dict | None \n \n Event name: login \n A user (represented by event.actor ) has logged in. \n \n \n \n \n class datasette.events. LogoutEvent actor : dict | None \n \n Event name: logout \n A user (represented by event.actor ) has logged out. \n \n \n \n \n class datasette.events. CreateTokenEvent actor : dict | None expires_after : int | None restrict_all : list restrict_database : dict restrict_resource : dict \n \n Event name: create-token \n A user created an API token. \n \n \n Variables \n \n \n \n expires_after -- Number of seconds after which this token will expire. \n \n \n restrict_all -- Restricted permissions for this token. \n \n \n restrict_database -- Restricted database permissions for this token. \n \n \n restrict_resource -- Restricted resource permissions for this token. \n \n \n \n \n \n \n \n \n \n class datasette.events. CreateTableEvent actor : dict | None database : str table : str schema : str \n \n Event name: create-table \n A new table has been created in the database. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was created. \n \n \n table -- The name of the table that was created \n \n \n schema -- The SQL schema definition for the new table. \n \n \n \n \n \n \n \n \n \n class datasette.events. DropTableEvent actor : dict | None database : str table : str \n \n Event name: drop-table \n A table has been dropped from the database. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was dropped. \n \n \n table -- The name of the table that was dropped \n \n \n \n \n \n \n \n \n \n class datasette.events. AlterTableEvent actor : dict | None database : str table : str before_schema : str after_schema : str \n \n Event name: alter-table \n A table has been altered. 
\n \n \n Variables \n \n \n \n database -- The name of the database where the table was altered \n \n \n table -- The name of the table that was altered \n \n \n before_schema -- The table's SQL schema before the alteration \n \n \n after_schema -- The table's SQL schema after the alteration \n \n \n \n \n \n \n \n \n \n class datasette.events. InsertRowsEvent actor : dict | None database : str table : str num_rows : int ignore : bool replace : bool \n \n Event name: insert-rows \n Rows were inserted into a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the rows were inserted. \n \n \n table -- The name of the table where the rows were inserted. \n \n \n num_rows -- The number of rows that were requested to be inserted. \n \n \n ignore -- Was ignore set? \n \n \n replace -- Was replace set? \n \n \n \n \n \n \n \n \n \n class datasette.events. UpsertRowsEvent actor : dict | None database : str table : str num_rows : int \n \n Event name: upsert-rows \n Rows were upserted into a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the rows were inserted. \n \n \n table -- The name of the table where the rows were inserted. \n \n \n num_rows -- The number of rows that were requested to be inserted. \n \n \n \n \n \n \n \n \n \n class datasette.events. UpdateRowEvent actor : dict | None database : str table : str pks : list \n \n Event name: update-row \n A row was updated in a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the row was updated. \n \n \n table -- The name of the table where the row was updated. \n \n \n pks -- The primary key values of the updated row. \n \n \n \n \n \n \n \n \n \n class datasette.events. DeleteRowEvent actor : dict | None database : str table : str pks : list \n \n Event name: delete-row \n A row was deleted from a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the row was deleted. \n \n \n table -- The name of the table where the row was deleted. 
\n \n \n pks -- The primary key values of the deleted row.", "breadcrumbs": "[]", "references": "[]"}
{"id": "facets:facets-in-query-strings", "page": "facets", "ref": "facets-in-query-strings", "title": "Facets in query strings", "content": "To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL.\n For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this: \n /dbname/tablename?_facet=state&_facet=city_id \n This works for both the HTML interface and the .json view.\n When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this: \n {\n \"state\": {\n \"name\": \"state\",\n \"results\": [\n {\n \"value\": \"CA\",\n \"label\": \"CA\",\n \"count\": 10,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=CA\",\n \"selected\": false\n },\n {\n \"value\": \"MI\",\n \"label\": \"MI\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=MI\",\n \"selected\": false\n },\n {\n \"value\": \"MC\",\n \"label\": \"MC\",\n \"count\": 1,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&state=MC\",\n \"selected\": false\n }\n ],\n \"truncated\": false\n },\n \"city_id\": {\n \"name\": \"city_id\",\n \"results\": [\n {\n \"value\": 1,\n \"label\": \"San Francisco\",\n \"count\": 6,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=1\",\n \"selected\": false\n },\n {\n \"value\": 2,\n \"label\": \"Los Angeles\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=2\",\n \"selected\": false\n },\n {\n \"value\": 3,\n \"label\": \"Detroit\",\n \"count\": 4,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=3\",\n \"selected\": false\n },\n {\n \"value\": 4,\n \"label\": \"Memnonia\",\n \"count\": 1,\n \"toggle_url\": \"http://...?_facet=city_id&_facet=state&city_id=4\",\n \"selected\": false\n }\n ],\n \"truncated\": false\n }\n} \n If Datasette detects that a column is a foreign key, the \"label\" property will be automatically derived from the detected label column on the referenced table. \n The default number of facet results returned is 30, controlled by the default_facet_size setting.\n You can increase this on an individual page by adding ?_facet_size=100 to the query string, up to a maximum of max_returned_rows (which defaults to 1000).", "breadcrumbs": "[\"Facets\"]", "references": "[]"}
{"id": "facets:facets-metadata", "page": "facets", "ref": "facets-metadata", "title": "Facets in metadata", "content": "You can turn facets on by default for specific tables by adding them to a \"facets\" key in a Datasette Metadata file. \n Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"]\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]] \n Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view.
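For example, you can fetch the facet_results block for a table with curl - shown here against the public demo instance used elsewhere in this documentation: \n curl 'https://latest.datasette.io/fixtures/facetable.json?_facet=state'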
\n You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this: \n [[[cog\nmetadata_example(cog, {\n \"facets\": [\n {\"array\": \"tags\"},\n {\"date\": \"created\"}\n ]\n}) \n ]]] \n [[[end]]] \n You can change the default facet size (the number of results shown for each facet) for a table using facet_size : \n [[[cog\nmetadata_example(cog, {\n \"databases\": {\n \"sf-trees\": {\n \"tables\": {\n \"Street_Tree_List\": {\n \"facets\": [\"qLegalStatus\"],\n \"facet_size\": 10\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "facets:id1", "page": "facets", "ref": "id1", "title": "Facets", "content": "Datasette facets can be used to add a faceted browse interface to any database table.\n With facets, tables are displayed along with a summary showing the most common values in specified columns.\n These values can be selected to further filter the table. \n Here's an example : \n \n Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://congress-legislators.datasettes.com/legislators/legislator_terms?_facet=type&_facet=party&_facet=state&_facet_size=10\", \"label\": \"an example\"}]"} {"id": "facets:id2", "page": "facets", "ref": "id2", "title": "Facet by JSON array", "content": "If your SQLite installation provides the json1 extension (you can check using /-/versions ) Datasette will automatically detect columns that contain JSON arrays of values and offer a faceting interface against those columns. \n This is useful for modelling things like tags without needing to break them out into a new table. \n Example here: latest.datasette.io/fixtures/facetable?_facet_array=tags", "breadcrumbs": "[\"Facets\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/facetable?_facet_array=tags\", \"label\": \"latest.datasette.io/fixtures/facetable?_facet_array=tags\"}]"} {"id": "facets:id3", "page": "facets", "ref": "id3", "title": "Facet by date", "content": "If Datasette finds any columns that contain dates in the first 100 values, it will offer a faceting interface against the dates of those values.\n This works especially well against timestamp values such as 2019-03-01 12:44:00 . \n Example here: latest.datasette.io/fixtures/facetable?_facet_date=created", "breadcrumbs": "[\"Facets\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/facetable?_facet_date=created\", \"label\": \"latest.datasette.io/fixtures/facetable?_facet_date=created\"}]"} {"id": "facets:speeding-up-facets-with-indexes", "page": "facets", "ref": "speeding-up-facets-with-indexes", "title": "Speeding up facets with indexes", "content": "The performance of facets can be greatly improved by adding indexes on the columns you wish to facet by.\n Adding indexes can be performed using the sqlite3 command-line utility. 
Here's how to add an index on the state column in a table called Food_Trucks : \n sqlite3 mydatabase.db \n SQLite version 3.19.3 2017-06-27 16:48:08\nEnter \".help\" for usage hints.\nsqlite> CREATE INDEX Food_Trucks_state ON Food_Trucks(\"state\"); \n Or using the sqlite-utils command-line utility: \n sqlite-utils create-index mydatabase.db Food_Trucks state", "breadcrumbs": "[\"Facets\"]", "references": "[{\"href\": \"https://sqlite-utils.datasette.io/en/stable/cli.html#creating-indexes\", \"label\": \"sqlite-utils\"}]"} {"id": "facets:suggested-facets", "page": "facets", "ref": "suggested-facets", "title": "Suggested facets", "content": "Datasette's table UI will suggest facets for the user to apply, based on the following criteria: \n For the currently filtered data are there any columns which, if applied as a facet... \n \n \n Will return 30 or less unique options \n \n \n Will return more than one unique option \n \n \n Will return less unique options than the total number of filtered rows \n \n \n And the query used to evaluate this criteria can be completed in under 50ms \n \n \n That last point is particularly important: Datasette runs a query for every column that is displayed on a page, which could get expensive - so to avoid slow load times it sets a time limit of just 50ms for each of those queries.\n This means suggested facets are unlikely to appear for tables with millions of records in them.", "breadcrumbs": "[\"Facets\"]", "references": "[]"} {"id": "full_text_search:configuring-fts-by-hand", "page": "full_text_search", "ref": "configuring-fts-by-hand", "title": "Configuring FTS by hand", "content": "We recommend using sqlite-utils , but if you want to hand-roll a SQLite full-text search table you can do so using the following SQL. \n To enable full-text search for a table called items that works against the name and description columns, you would run this SQL to create a new items_fts FTS virtual table: \n CREATE VIRTUAL TABLE \"items_fts\" USING FTS4 (\n name,\n description,\n content=\"items\"\n); \n This creates a set of tables to power full-text search against items . The new items_fts table will be detected by Datasette as the fts_table for the items table. \n Creating the table is not enough: you also need to populate it with a copy of the data that you wish to make searchable. You can do that using the following SQL: \n INSERT INTO \"items_fts\" (rowid, name, description)\n SELECT rowid, name, description FROM items; \n If your table has columns that are foreign key references to other tables you can include that data in your full-text search index using a join. 
Imagine the items table has a foreign key column called category_id which refers to a categories table - you could create a full-text search table like this: \n CREATE VIRTUAL TABLE \"items_fts\" USING FTS4 (\n name,\n description,\n category_name,\n content=\"items\"\n); \n And then populate it like this: \n INSERT INTO \"items_fts\" (rowid, name, description, category_name)\n SELECT items.rowid,\n items.name,\n items.description,\n categories.name\n FROM items JOIN categories ON items.category_id=categories.id; \n You can use this technique to populate the full-text search index from any combination of tables and joins that makes sense for your project.", "breadcrumbs": "[\"Full-text search\", \"Enabling full-text search for a SQLite table\"]", "references": "[{\"href\": \"https://sqlite-utils.datasette.io/\", \"label\": \"sqlite-utils\"}]"} {"id": "full_text_search:configuring-fts-using-csvs-to-sqlite", "page": "full_text_search", "ref": "configuring-fts-using-csvs-to-sqlite", "title": "Configuring FTS using csvs-to-sqlite", "content": "If your data starts out in CSV files, you can use Datasette's companion tool csvs-to-sqlite to convert that file into a SQLite database and enable full-text search on specific columns. For a file called items.csv where you want full-text search to operate against the name and description columns you would run the following: \n csvs-to-sqlite items.csv items.db -f name -f description", "breadcrumbs": "[\"Full-text search\", \"Enabling full-text search for a SQLite table\"]", "references": "[{\"href\": \"https://github.com/simonw/csvs-to-sqlite\", \"label\": \"csvs-to-sqlite\"}]"} {"id": "full_text_search:configuring-fts-using-sqlite-utils", "page": "full_text_search", "ref": "configuring-fts-using-sqlite-utils", "title": "Configuring FTS using sqlite-utils", "content": "sqlite-utils is a CLI utility and Python library for manipulating SQLite databases. You can use it from Python code to configure FTS search, or you can achieve the same goal using the accompanying command-line tool . \n Here's how to use sqlite-utils to enable full-text search for an items table across the name and description columns: \n sqlite-utils enable-fts mydatabase.db items name description", "breadcrumbs": "[\"Full-text search\", \"Enabling full-text search for a SQLite table\"]", "references": "[{\"href\": \"https://sqlite-utils.datasette.io/\", \"label\": \"sqlite-utils\"}, {\"href\": \"https://sqlite-utils.datasette.io/en/latest/python-api.html#enabling-full-text-search\", \"label\": \"it from Python code\"}, {\"href\": \"https://sqlite-utils.datasette.io/en/latest/cli.html#configuring-full-text-search\", \"label\": \"using the accompanying command-line tool\"}]"} {"id": "full_text_search:full-text-search-advanced-queries", "page": "full_text_search", "ref": "full-text-search-advanced-queries", "title": "Advanced SQLite search queries", "content": "SQLite full-text search includes support for a variety of advanced queries , including AND , OR , NOT and NEAR . \n By default Datasette disables these features to ensure they do not cause errors or confusion for users who are not aware of them. You can disable this escaping and use the advanced queries by adding &_searchmode=raw to the table page query string. \n If you want to enable these operators by default for a specific table, you can do so by adding \"searchmode\": \"raw\" to the metadata configuration for that table, see Configuring full-text search for a table or view . 
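As an illustration (against a hypothetical table), with raw mode active the advanced operators can be passed directly in the query string: \n curl 'http://127.0.0.1:8001/dbname/tablename.json?_search=fraud+NOT+insurance&_searchmode=raw'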
\n If that option has been specified in the table metadata but you want to over-ride it and return to the default behavior you can append &_searchmode=escaped to the query string.", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://www.sqlite.org/fts5.html#full_text_query_syntax\", \"label\": \"a variety of advanced queries\"}]"} {"id": "full_text_search:full-text-search-custom-sql", "page": "full_text_search", "ref": "full-text-search-custom-sql", "title": "Searches using custom SQL", "content": "You can include full-text search results in custom SQL queries. The general pattern with SQLite search is to run the search as a sub-select that returns rowid values, then include those rowids in another part of the query. \n You can see the syntax for a basic search by running that search on a table page and then clicking \"View and edit SQL\" to see the underlying SQL. For example, consider this search for manafort is the US FARA database : \n /fara/FARA_All_ShortForms?_search=manafort \n If you click View and edit SQL you'll see that the underlying SQL looks like this: \n select\n rowid,\n Short_Form_Termination_Date,\n Short_Form_Date,\n Short_Form_Last_Name,\n Short_Form_First_Name,\n Registration_Number,\n Registration_Date,\n Registrant_Name,\n Address_1,\n Address_2,\n City,\n State,\n Zip\nfrom\n FARA_All_ShortForms\nwhere\n rowid in (\n select\n rowid\n from\n FARA_All_ShortForms_fts\n where\n FARA_All_ShortForms_fts match escape_fts(:search)\n )\norder by\n rowid\nlimit\n 101", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"manafort is the US FARA database\"}, {\"href\": \"https://fara.datasettes.com/fara?sql=select%0D%0A++rowid%2C%0D%0A++Short_Form_Termination_Date%2C%0D%0A++Short_Form_Date%2C%0D%0A++Short_Form_Last_Name%2C%0D%0A++Short_Form_First_Name%2C%0D%0A++Registration_Number%2C%0D%0A++Registration_Date%2C%0D%0A++Registrant_Name%2C%0D%0A++Address_1%2C%0D%0A++Address_2%2C%0D%0A++City%2C%0D%0A++State%2C%0D%0A++Zip%0D%0Afrom%0D%0A++FARA_All_ShortForms%0D%0Awhere%0D%0A++rowid+in+%28%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++FARA_All_ShortForms_fts%0D%0A++++where%0D%0A++++++FARA_All_ShortForms_fts+match+escape_fts%28%3Asearch%29%0D%0A++%29%0D%0Aorder+by%0D%0A++rowid%0D%0Alimit%0D%0A++101&search=manafort\", \"label\": \"View and edit SQL\"}]"} {"id": "full_text_search:full-text-search-enabling", "page": "full_text_search", "ref": "full-text-search-enabling", "title": "Enabling full-text search for a SQLite table", "content": "Datasette takes advantage of the external content mechanism in SQLite, which allows a full-text search virtual table to be associated with the contents of another SQLite table. \n To set up full-text search for a table, you need to do two things: \n \n \n Create a new FTS virtual table associated with your table \n \n \n Populate that FTS table with the data that you would like to be able to run searches against", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://www.sqlite.org/fts3.html#_external_content_fts4_tables_\", \"label\": \"external content\"}]"} {"id": "full_text_search:full-text-search-fts-versions", "page": "full_text_search", "ref": "full-text-search-fts-versions", "title": "FTS versions", "content": "There are three different versions of the SQLite FTS module: FTS3, FTS4 and FTS5. 
You can tell which versions are supported by your instance of Datasette by checking the /-/versions page. \n FTS5 is the most advanced module but may not be available in the SQLite version that is bundled with your Python installation. Most importantly, FTS5 is the only version that has the ability to order by search relevance without needing extra code. \n If you can't be sure that FTS5 will be available, you should use FTS4.", "breadcrumbs": "[\"Full-text search\"]", "references": "[]"} {"id": "full_text_search:full-text-search-table-or-view", "page": "full_text_search", "ref": "full-text-search-table-or-view", "title": "Configuring full-text search for a table or view", "content": "If a table has a corresponding FTS table set up using the content= argument to CREATE VIRTUAL TABLE shown below, Datasette will detect it automatically and add a search interface to the table page for that table. \n You can also manually configure which table should be used for full-text search using query string parameters or Metadata . You can set the associated FTS table for a specific table and you can also set one for a view - if you do that, the page for that SQL view will offer a search option. \n Use ?_fts_table=x to over-ride the FTS table for a specific page. If the primary key was something other than rowid you can use ?_fts_pk=col to set that as well. This is particularly useful for views, for example: \n https://latest.datasette.io/fixtures/searchable_view?_fts_table=searchable_fts&_fts_pk=pk \n The fts_table metadata property can be used to specify an associated FTS table. If the primary key column in your table which was used to populate the FTS table is something other than rowid , you can specify the column to use with the fts_pk property. \n The \"searchmode\": \"raw\" property can be used to default the table to accepting SQLite advanced search operators, as described in Advanced SQLite search queries . \n Here is an example which enables full-text search (with SQLite advanced search operators) for a display_ads view which is defined against the ads table and hence needs to run FTS against the ads_fts table, using the id as the primary key: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"databases\": {\n \"russian-ads\": {\n \"tables\": {\n \"display_ads\": {\n \"fts_table\": \"ads_fts\",\n \"fts_pk\": \"id\",\n \"searchmode\": \"raw\"\n }\n }\n }\n }\n}) \n ]]] \n [[[end]]]", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/searchable_view?_fts_table=searchable_fts&_fts_pk=pk\", \"label\": \"https://latest.datasette.io/fixtures/searchable_view?_fts_table=searchable_fts&_fts_pk=pk\"}]"} {"id": "full_text_search:full-text-search-table-view-api", "page": "full_text_search", "ref": "full-text-search-table-view-api", "title": "The table page and table view API", "content": "Table views that support full-text search can be queried using the ?_search=TERMS query string parameter. This will run the search against content from all of the columns that have been included in the index. \n Try this example: fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort \n SQLite full-text search supports wildcards. This means you can easily implement prefix auto-complete by including an asterisk at the end of the search term - for example: \n /dbname/tablename/?_search=rob* \n This will return all records containing at least one word that starts with the letters rob . 
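These search parameters work for the .json view as well, so you can retrieve search results programmatically - for example, against the demo instance linked above: \n curl 'https://fara.datasettes.com/fara/FARA_All_ShortForms.json?_search=manafort'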
\n You can also run searches against just the content of a specific named column by using _search_COLNAME=TERMS - for example, this would search for just rows where the name column in the FTS index mentions Sarah : \n /dbname/tablename/?_search_name=Sarah", "breadcrumbs": "[\"Full-text search\"]", "references": "[{\"href\": \"https://fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\", \"label\": \"fara.datasettes.com/fara/FARA_All_ShortForms?_search=manafort\"}]"} {"id": "full_text_search:id1", "page": "full_text_search", "ref": "id1", "title": "Full-text search", "content": "SQLite includes a powerful mechanism for enabling full-text search against SQLite records. Datasette can detect if a table has had full-text search configured for it in the underlying database and display a search interface for filtering that table. \n Here's an example search : \n \n Datasette automatically detects which tables have been configured for full-text search.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://www.sqlite.org/fts3.html\", \"label\": \"a powerful mechanism for enabling full-text search\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/regmem/items?_search=hamper&_sort_desc=date\", \"label\": \"an example search\"}]"} {"id": "getting_started:getting-started", "page": "getting_started", "ref": "getting-started", "title": "Getting started", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "getting_started:getting-started-datasette-lite", "page": "getting_started", "ref": "getting-started-datasette-lite", "title": "Datasette in your browser with Datasette Lite", "content": "Datasette Lite is Datasette packaged using WebAssembly so that it runs entirely in your browser, no Python web application server required. \n You can pass a URL to a CSV, SQLite or raw SQL file directly to Datasette Lite to explore that data in your browser. \n This example link opens Datasette Lite and loads the SQL Murder Mystery example database from Northwestern University Knight Lab .", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://lite.datasette.io/\", \"label\": \"Datasette Lite\"}, {\"href\": \"https://lite.datasette.io/?url=https%3A%2F%2Fraw.githubusercontent.com%2FNUKnightLab%2Fsql-mysteries%2Fmaster%2Fsql-murder-mystery.db#/sql-murder-mystery\", \"label\": \"example link\"}, {\"href\": \"https://github.com/NUKnightLab/sql-mysteries\", \"label\": \"Northwestern University Knight Lab\"}]"} {"id": "getting_started:getting-started-demo", "page": "getting_started", "ref": "getting-started-demo", "title": "Play with a live demo", "content": "The best way to experience Datasette for the first time is with a demo: \n \n \n global-power-plants.datasettes.com provides a searchable database of power plants around the world, using data from the World Resources Institute rendered using the datasette-cluster-map plugin.
\n \n \n fivethirtyeight.datasettes.com shows Datasette running against over 400 datasets imported from the FiveThirtyEight GitHub repository .", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://global-power-plants.datasettes.com/global-power-plants/global-power-plants\", \"label\": \"global-power-plants.datasettes.com\"}, {\"href\": \"https://www.wri.org/publication/global-power-plant-database\", \"label\": \"World Resources Institute\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight\", \"label\": \"fivethirtyeight.datasettes.com\"}, {\"href\": \"https://github.com/fivethirtyeight/data\", \"label\": \"FiveThirtyEight GitHub repository\"}]"} {"id": "getting_started:getting-started-glitch", "page": "getting_started", "ref": "getting-started-glitch", "title": "Try Datasette without installing anything using Glitch", "content": "Glitch is a free online tool for building web apps directly from your web browser. You can use Glitch to try out Datasette without needing to install any software on your own computer. \n Here's a demo project on Glitch which you can use as the basis for your own experiments: \n glitch.com/~datasette-csvs \n Glitch allows you to \"remix\" any project to create your own copy and start editing it in your browser. You can remix the datasette-csvs project by clicking this button: \n \n Find a CSV file and drag it onto the Glitch file explorer panel - datasette-csvs will automatically convert it to a SQLite database (using sqlite-utils ) and allow you to start exploring it using Datasette. \n If your CSV file has a latitude and longitude column you can visualize it on a map by uncommenting the datasette-cluster-map line in the requirements.txt file using the Glitch file editor. \n Need some data? Try this Public Art Data for the city of Seattle - hit \"Export\" and select \"CSV\" to download it as a CSV file. \n For more on how this works, see Running Datasette on Glitch .", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://glitch.com/\", \"label\": \"Glitch\"}, {\"href\": \"https://glitch.com/~datasette-csvs\", \"label\": \"glitch.com/~datasette-csvs\"}, {\"href\": \"https://glitch.com/edit/#!/remix/datasette-csvs\", \"label\": null}, {\"href\": \"https://github.com/simonw/sqlite-utils\", \"label\": \"sqlite-utils\"}, {\"href\": \"https://data.seattle.gov/Community/Public-Art-Data/j7sn-tdzk\", \"label\": \"Public Art Data\"}, {\"href\": \"https://simonwillison.net/2019/Apr/23/datasette-glitch/\", \"label\": \"Running Datasette on Glitch\"}]"} {"id": "getting_started:getting-started-tutorial", "page": "getting_started", "ref": "getting-started-tutorial", "title": "Follow a tutorial", "content": "Datasette has several tutorials to help you get started with the tool. Try one of the following: \n \n \n Exploring a database with Datasette shows how to use the Datasette web interface to explore a new database. \n \n \n Learn SQL with Datasette introduces SQL, and shows how to use that query language to ask questions of your data.
\n \n \n Cleaning data with sqlite-utils and Datasette guides you through using sqlite-utils to turn a CSV file into a database that you can explore using Datasette.", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"https://datasette.io/tutorials\", \"label\": \"tutorials\"}, {\"href\": \"https://datasette.io/tutorials/explore\", \"label\": \"Exploring a database with Datasette\"}, {\"href\": \"https://datasette.io/tutorials/learn-sql\", \"label\": \"Learn SQL with Datasette\"}, {\"href\": \"https://datasette.io/tutorials/clean-data\", \"label\": \"Cleaning data with sqlite-utils and Datasette\"}, {\"href\": \"https://sqlite-utils.datasette.io/\", \"label\": \"sqlite-utils\"}]"} {"id": "getting_started:getting-started-your-computer", "page": "getting_started", "ref": "getting-started-your-computer", "title": "Using Datasette on your own computer", "content": "First, follow the Installation instructions. Now you can run Datasette against a SQLite file on your computer using the following command: \n datasette path/to/database.db \n This will start a web server on port 8001 - visit http://localhost:8001/ \n to access the web interface. \n Add -o to open your browser automatically once Datasette has started: \n datasette path/to/database.db -o \n Use Chrome on OS X? You can run datasette against your browser history\n like so: \n datasette ~/Library/Application\\ Support/Google/Chrome/Default/History --nolock \n The --nolock option ignores any file locks. This is safe as Datasette will open the file in read-only mode. \n Now visiting http://localhost:8001/History/downloads will show you a web\n interface to browse your downloads data: \n \n \n \n http://localhost:8001/History/downloads.json will return that data as\n JSON: \n {\n \"database\": \"History\",\n \"columns\": [\n \"id\",\n \"current_path\",\n \"target_path\",\n \"start_time\",\n \"received_bytes\",\n \"total_bytes\",\n ...\n ],\n \"rows\": [\n [\n 1,\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n \"/Users/simonw/Downloads/DropboxInstaller.dmg\",\n 13097290269022132,\n 626688,\n 0,\n ...\n ]\n ]\n} \n http://localhost:8001/History/downloads.json?_shape=objects will return that data as\n JSON in a more convenient format: \n {\n ...\n \"rows\": [\n {\n \"start_time\": 13097290269022132,\n \"interrupt_reason\": 0,\n \"hash\": \"\",\n \"id\": 1,\n \"site_url\": \"\",\n \"referrer\": \"https://www.dropbox.com/downloading?src=index\",\n ...\n }\n ]\n}", "breadcrumbs": "[\"Getting started\"]", "references": "[{\"href\": \"http://localhost:8001/\", \"label\": \"http://localhost:8001/\"}, {\"href\": \"http://localhost:8001/History/downloads\", \"label\": \"http://localhost:8001/History/downloads\"}, {\"href\": \"http://localhost:8001/History/downloads.json\", \"label\": \"http://localhost:8001/History/downloads.json\"}, {\"href\": \"http://localhost:8001/History/downloads.json?_shape=objects\", \"label\": \"http://localhost:8001/History/downloads.json?_shape=objects\"}]"} {"id": "index:contents", "page": "index", "ref": "contents", "title": "Contents", "content": "Getting started Play with a live demo Follow a tutorial Datasette in your browser with Datasette Lite Try Datasette without installing anything using Glitch Using Datasette on your own computer Installation Basic installation Datasette Desktop for Mac Using Homebrew Using pip Advanced installation options Using pipx Using Docker A note about extensions Configuration Configuration via the command-line datasette.yaml reference Settings Plugin 
configuration Permissions configuration Canned queries configuration Custom CSS and JavaScript The Datasette Ecosystem sqlite-utils Dogsheep CLI reference datasette --help datasette serve datasette --get datasette serve --help-settings datasette plugins datasette install datasette uninstall datasette publish datasette publish cloudrun datasette publish heroku datasette package datasette inspect datasette create-token Pages and API endpoints Top-level index Database Hidden tables Table Row Publishing data datasette publish Publishing to Google Cloud Run Publishing to Heroku Publishing to Vercel Publishing to Fly Custom metadata and plugins datasette package Deploying Datasette Deployment fundamentals Running Datasette using systemd Running Datasette using OpenRC Deploying using buildpacks Running Datasette behind a proxy Nginx proxy configuration Apache proxy configuration JSON API Default representation Different shapes Pagination Special JSON arguments Table arguments Column filter arguments Special table arguments Expanding foreign key references Discovering the JSON for a page Enabling CORS The JSON write API Inserting rows Upserting rows Updating a row Deleting a row Creating a table Creating a table from example data Dropping tables Running SQL queries Named parameters Views Canned queries Canned query parameters Additional canned query options Writable canned queries Magic parameters JSON API for writable canned queries Pagination Cross-database queries Authentication and permissions Actors Using the \"root\" actor Permissions How permissions are resolved Defining permissions with \"allow\" blocks The /-/allow-debug tool Access permissions in datasette.yaml Access to an instance Access to specific databases Access to specific tables and views Access to specific canned queries Controlling the ability to execute arbitrary SQL Other permissions in datasette.yaml API Tokens datasette create-token Checking permissions in plugins actor_matches_allow() The permissions debug tool The ds_actor cookie Including an expiry time The /-/logout page Built-in permissions view-instance view-database view-database-download view-table view-query insert-row delete-row update-row create-table alter-table drop-table execute-sql permissions-debug debug-menu Performance and caching Immutable mode Using \"datasette inspect\" HTTP caching datasette-hashed-urls CSV export URL parameters Streaming all records Binary data Linking to binary downloads Binary plugins Facets Facets in query strings Facets in metadata Suggested facets Speeding up facets with indexes Facet by JSON array Facet by date Full-text search The table page and table view API Advanced SQLite search queries Configuring full-text search for a table or view Searches using custom SQL Enabling full-text search for a SQLite table Configuring FTS using sqlite-utils Configuring FTS using csvs-to-sqlite Configuring FTS by hand FTS versions SpatiaLite Warning Installation Installing SpatiaLite on OS X Installing SpatiaLite on Linux Spatial indexing latitude/longitude columns Making use of a spatial index Importing shapefiles into SpatiaLite Importing GeoJSON polygons using Shapely Querying polygons using within() Metadata Per-database and per-table metadata Source, license and about Column descriptions Specifying units for a column Setting a default sort order Setting a custom page size Setting which columns can be used for sorting Specifying the label column for a table Hiding tables Metadata reference Top-level metadata Database-level metadata 
Table-level metadata Settings Using --setting Configuration directory mode Settings default_allow_sql default_page_size sql_time_limit_ms max_returned_rows max_insert_rows num_sql_threads allow_facet default_facet_size facet_time_limit_ms facet_suggest_time_limit_ms suggest_facets allow_download allow_signed_tokens max_signed_tokens_ttl default_cache_ttl cache_size_kb allow_csv_stream max_csv_mb truncate_cells_html force_https_urls template_debug trace_debug base_url Configuring the secret Using secrets with datasette publish Introspection /-/metadata /-/versions /-/plugins /-/settings /-/config /-/databases /-/threads /-/actor /-/messages Custom pages and templates CSS classes on the <body> Serving static files Publishing static assets Custom templates Custom pages Path parameters for pages Custom headers and status codes Returning 404s Custom redirects Custom error pages Plugins Installing plugins One-off plugins using --plugins-dir Deploying plugins using datasette publish Controlling which plugins are loaded Seeing what plugins are installed Plugin configuration Secret configuration values Writing plugins Tracing plugin hooks Writing one-off plugins Starting an installable plugin using cookiecutter Packaging a plugin Static assets Custom templates Writing plugins that accept configuration Designing URLs for your plugin Building URLs within plugins Plugins that define new plugin hooks JavaScript plugins The datasette_init event datasetteManager JavaScript plugin objects makeAboveTablePanelConfigs() makeColumnActions(columnDetails) Selectors Plugin hooks prepare_connection(conn, database, datasette) prepare_jinja2_environment(env, datasette) Page extras extra_template_vars(template, database, table, columns, view_name, request, datasette) extra_css_urls(template, database, table, columns, view_name, request, datasette) extra_js_urls(template, database, table, columns, view_name, request, datasette) extra_body_script(template, database, table, columns, view_name, request, datasette) publish_subcommand(publish) render_cell(row, value, column, table, database, datasette, request) register_output_renderer(datasette) register_routes(datasette) register_commands(cli) register_facet_classes() register_permissions(datasette) asgi_wrapper(datasette) startup(datasette) canned_queries(datasette, database, actor) actor_from_request(datasette, request) actors_from_ids(datasette, actor_ids) jinja2_environment_from_request(datasette, request, env) filters_from_request(request, database, table, datasette) permission_allowed(datasette, actor, action, resource) register_magic_parameters(datasette) forbidden(datasette, request, message) handle_exception(datasette, request, exception) skip_csrf(datasette, scope) get_metadata(datasette, key, database, table) menu_links(datasette, actor, request) Action hooks table_actions(datasette, actor, database, table, request) view_actions(datasette, actor, database, view, request) query_actions(datasette, actor, database, query_name, request, sql, params) row_actions(datasette, actor, request, database, table, row) database_actions(datasette, actor, database, request) homepage_actions(datasette, actor, request) Template slots top_homepage(datasette, request) top_database(datasette, request, database) top_table(datasette, request, database, table) top_row(datasette, request, database, table, row) top_query(datasette, request, database, sql) top_canned_query(datasette, request, database, query_name) Event tracking track_event(datasette, event) register_events(datasette) Testing
plugins Setting up a Datasette test instance Using datasette.client in tests Using pdb for errors thrown inside Datasette Using pytest fixtures Testing outbound HTTP calls with pytest-httpx Registering a plugin for the duration of a test Internals for plugins Request object The MultiParams class Response class Returning a response with .asgi_send(send) Setting cookies with response.set_cookie() Datasette class .databases .permissions .plugin_config(plugin_name, database=None, table=None) await .render_template(template, context=None, request=None) await .actors_from_ids(actor_ids) await .permission_allowed(actor, action, resource=None, default=...) await .ensure_permissions(actor, permissions) await .check_visibility(actor, action=None, resource=None, permissions=None) .create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None) .get_permission(name_or_abbr) .get_database(name) .get_internal_database() .add_database(db, name=None, route=None) .add_memory_database(name) .remove_database(name) await .track_event(event) .sign(value, namespace=\"default\") .unsign(value, namespace=\"default\") .add_message(request, message, type=datasette.INFO) .absolute_url(request, path) .setting(key) .resolve_database(request) .resolve_table(request) .resolve_row(request) datasette.client datasette.urls Database class Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None) db.hash await db.execute(sql, ...) Results await db.execute_fn(fn) await db.execute_write(sql, params=None, block=True) await db.execute_write_script(sql, block=True) await db.execute_write_many(sql, params_seq, block=True) await db.execute_write_fn(fn, block=True, transaction=True) await db.execute_isolated_fn(fn) db.close() Database introspection CSRF protection Datasette's internal database The datasette.utils module parse_metadata(content) await_me_maybe(value) derive_named_parameters(db, sql) Tilde encoding datasette.tracer Tracing child tasks Import shortcuts Events LoginEvent LogoutEvent CreateTokenEvent CreateTableEvent DropTableEvent AlterTableEvent InsertRowsEvent UpsertRowsEvent UpdateRowEvent DeleteRowEvent Contributing General guidelines Setting up a development environment Running the tests Using fixtures Debugging Code formatting Running Black blacken-docs Prettier Editing and building the documentation Running Cog Continuously deployed demo instances Release process Alpha and beta releases Releasing bug fixes from a branch Upgrading CodeMirror Changelog 1.0a13 (2024-03-12) 1.0a12 (2024-02-29) 1.0a11 (2024-02-19) 1.0a10 (2024-02-17) 1.0a9 (2024-02-16) Alter table support for create, insert, upsert and update Permissions fix for the upsert API Permission checks now consider opinions from every plugin Other changes 1.0a8 (2024-02-07) Configuration JavaScript plugins Plugin hooks Documentation Minor fixes 0.64.6 (2023-12-22) 0.64.5 (2023-10-08) 1.0a7 (2023-09-21) 0.64.4 (2023-09-21) 1.0a6 (2023-09-07) 1.0a5 (2023-08-29) 1.0a4 (2023-08-21) 1.0a3 (2023-08-09) Smaller changes 0.64.2 (2023-03-08) 0.64.1 (2023-01-11) 0.64 (2023-01-09) 0.63.3 (2022-12-17) 1.0a2 (2022-12-14) 1.0a1 (2022-12-01) 1.0a0 (2022-11-29) Signed API tokens Write API 0.63.2 (2022-11-18) 0.63.1 (2022-11-10) 0.63 (2022-10-27) Features Plugin hooks and internals Documentation 0.62 (2022-08-14) Features Plugin hooks Bug fixes Documentation 0.61.1 (2022-03-23) 0.61 (2022-03-23) 0.60.2 (2022-02-07) 0.60.1 (2022-01-20) 0.60 (2022-01-13) Plugins and internals Faceting Other small fixes 0.59.4 
(2021-11-29) 0.59.3 (2021-11-20) 0.59.2 (2021-11-13) 0.59.1 (2021-10-24) 0.59 (2021-10-14) 0.58.1 (2021-07-16) 0.58 (2021-07-14) 0.57.1 (2021-06-08) 0.57 (2021-06-05) New features Bug fixes and other improvements 0.56.1 (2021-06-05) 0.56 (2021-03-28) 0.55 (2021-02-18) 0.54.1 (2021-02-02) 0.54 (2021-01-25) The _internal database Named in-memory database support JavaScript modules Code formatting with Black and Prettier Other changes 0.53 (2020-12-10) 0.52.5 (2020-12-09) 0.52.4 (2020-12-05) 0.52.3 (2020-12-03) 0.52.2 (2020-12-02) 0.52.1 (2020-11-29) 0.52 (2020-11-28) 0.51.1 (2020-10-31) 0.51 (2020-10-31) New visual design Plugins can now add links within Datasette Binary data URL building Running Datasette behind a proxy Smaller changes 0.50.2 (2020-10-09) 0.50.1 (2020-10-09) 0.50 (2020-10-09) 0.49.1 (2020-09-15) 0.49 (2020-09-14) 0.48 (2020-08-16) 0.47.3 (2020-08-15) 0.47.2 (2020-08-12) 0.47.1 (2020-08-11) 0.47 (2020-08-11) 0.46 (2020-08-09) 0.45 (2020-07-01) Magic parameters for canned queries Log out Better plugin documentation New plugin hooks Smaller changes 0.44 (2020-06-11) Authentication Permissions Writable canned queries Flash messages Signed values and secrets CSRF protection Cookie methods register_routes() plugin hooks Smaller changes The road to Datasette 1.0 0.43 (2020-05-28) 0.42 (2020-05-08) 0.41 (2020-05-06) 0.40 (2020-04-21) 0.39 (2020-03-24) 0.38 (2020-03-08) 0.37.1 (2020-03-02) 0.37 (2020-02-25) 0.36 (2020-02-21) 0.35 (2020-02-04) 0.34 (2020-01-29) 0.33 (2019-12-22) 0.32 (2019-11-14) 0.31.2 (2019-11-13) 0.31.1 (2019-11-12) 0.31 (2019-11-11) 0.30.2 (2019-11-02) 0.30.1 (2019-10-30) 0.30 (2019-10-18) 0.29.3 (2019-09-02) 0.29.2 (2019-07-13) 0.29.1 (2019-07-11) 0.29 (2019-07-07) ASGI New plugin hook: asgi_wrapper New plugin hook: extra_template_vars Secret plugin configuration options Facet by date Easier custom templates for table rows ?_through= for joins through many-to-many tables Small changes 0.28 (2019-05-19) Supporting databases that change Faceting improvements, and faceting plugins datasette publish cloudrun register_output_renderer plugins Medium changes Small changes 0.27.1 (2019-05-09) 0.27 (2019-01-31) 0.26.1 (2019-01-10) 0.26 (2019-01-02) 0.25.2 (2018-12-16) 0.25.1 (2018-11-04) 0.25 (2018-09-19) 0.24 (2018-07-23) 0.23.2 (2018-07-07) 0.23.1 (2018-06-21) 0.23 (2018-06-18) CSV export Foreign key expansions New configuration settings Control HTTP caching with ?_ttl= Improved support for SpatiaLite latest.datasette.io Miscellaneous 0.22.1 (2018-05-23) 0.22 (2018-05-20) 0.21 (2018-05-05) 0.20 (2018-04-20) 0.19 (2018-04-16) 0.18 (2018-04-14) 0.17 (2018-04-13) 0.16 (2018-04-13) 0.15 (2018-04-09) 0.14 (2017-12-09) 0.13 (2017-11-24) 0.12 (2017-11-16) 0.11 (2017-11-14) 0.10 (2017-11-14) 0.9 (2017-11-13) 0.8 (2017-11-13)", "breadcrumbs": "[\"Datasette\"]", "references": "[]"} {"id": "index:datasette", "page": "index", "ref": "datasette", "title": "Datasette", "content": "An open source multi-tool for exploring and publishing data \n Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API. \n Datasette is aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is part of a wider ecosystem of tools and plugins dedicated to making working with structured data as productive as possible. 
\n Explore a demo , watch a presentation about the project or Try Datasette without installing anything using Glitch . \n Interested in learning Datasette? Start with the official tutorials . \n Support questions, feedback? Join the Datasette Discord .", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pypi.org/project/datasette/\", \"label\": null}, {\"href\": \"https://docs.datasette.io/en/stable/changelog.html\", \"label\": null}, {\"href\": \"https://pypi.org/project/datasette/\", \"label\": null}, {\"href\": \"https://github.com/simonw/datasette/actions?query=workflow%3ATest\", \"label\": null}, {\"href\": \"https://github.com/simonw/datasette/blob/main/LICENSE\", \"label\": null}, {\"href\": \"https://hub.docker.com/r/datasetteproject/datasette\", \"label\": null}, {\"href\": \"https://datasette.io/discord\", \"label\": null}, {\"href\": \"https://pypi.org/project/datasette/\", \"label\": null}, {\"href\": \"https://docs.datasette.io/en/stable/changelog.html\", \"label\": null}, {\"href\": \"https://pypi.org/project/datasette/\", \"label\": null}, {\"href\": \"https://github.com/simonw/datasette/actions?query=workflow%3ATest\", \"label\": null}, {\"href\": \"https://github.com/simonw/datasette/blob/main/LICENSE\", \"label\": null}, {\"href\": \"https://hub.docker.com/r/datasetteproject/datasette\", \"label\": null}, {\"href\": \"https://datasette.io/discord\", \"label\": null}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight\", \"label\": \"Explore a demo\"}, {\"href\": \"https://static.simonwillison.net/static/2018/pybay-datasette/\", \"label\": \"a presentation about the project\"}, {\"href\": \"https://datasette.io/tutorials\", \"label\": \"the official tutorials\"}, {\"href\": \"https://datasette.io/discord\", \"label\": \"Datasette Discord\"}]"} {"id": "installation:id1", "page": "installation", "ref": "id1", "title": "Installation", "content": "If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch \n \n There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. \n If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . 
\n \n \n \n Basic installation \n \n \n Datasette Desktop for Mac \n \n \n Using Homebrew \n \n \n Using pip \n \n \n \n \n Advanced installation options \n \n \n Using pipx \n \n \n Installing plugins using pipx \n \n \n Upgrading packages using pipx \n \n \n \n \n Using Docker \n \n \n Loading SpatiaLite \n \n \n Installing plugins \n \n \n \n \n \n \n A note about extensions", "breadcrumbs": "[]", "references": "[]"} {"id": "installation:installation-advanced", "page": "installation", "ref": "installation-advanced", "title": "Advanced installation options", "content": "", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installation-basic", "page": "installation", "ref": "installation-basic", "title": "Basic installation", "content": "", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installation-datasette-desktop", "page": "installation", "ref": "installation-datasette-desktop", "title": "Datasette Desktop for Mac", "content": "Datasette Desktop is a packaged Mac application which bundles Datasette together with Python and allows you to install and run Datasette directly on your laptop. This is the best option for local installation if you are not comfortable using the command line.", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://datasette.io/desktop\", \"label\": \"Datasette Desktop\"}]"} {"id": "installation:installation-docker", "page": "installation", "ref": "installation-docker", "title": "Using Docker", "content": "A Docker image containing the latest release of Datasette is published to Docker\n Hub here: https://hub.docker.com/r/datasetteproject/datasette/ \n If you have Docker installed (for example with Docker for Mac on OS X) you can download and run this\n image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n This will start an instance of Datasette running on your machine's port 8001,\n serving the fixtures.db file in your current directory. \n Now visit http://127.0.0.1:8001/ to access Datasette. \n (You can download a copy of fixtures.db from\n https://latest.datasette.io/fixtures.db ) \n To upgrade to the most recent release of Datasette, run the following: \n docker pull datasetteproject/datasette", "breadcrumbs": "[\"Installation\", \"Advanced installation options\"]", "references": "[{\"href\": \"https://hub.docker.com/r/datasetteproject/datasette/\", \"label\": \"https://hub.docker.com/r/datasetteproject/datasette/\"}, {\"href\": \"https://www.docker.com/docker-mac\", \"label\": \"Docker for Mac\"}, {\"href\": \"http://127.0.0.1:8001/\", \"label\": \"http://127.0.0.1:8001/\"}, {\"href\": \"https://latest.datasette.io/fixtures.db\", \"label\": \"https://latest.datasette.io/fixtures.db\"}]"} {"id": "installation:installation-extensions", "page": "installation", "ref": "installation-extensions", "title": "A note about extensions", "content": "SQLite supports extensions, such as SpatiaLite for geospatial operations. \n These can be loaded using the --load-extension argument, like so: \n datasette --load-extension=/usr/local/lib/mod_spatialite.dylib \n Some Python installations do not include support for SQLite extensions. If this is the case you will see the following error when you attempt to load an extension: \n \n Your Python installation does not have the ability to load SQLite extensions. 
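One quick way to find out whether your Python's sqlite3 module was compiled with extension support, before attempting to load anything, is this minimal sketch:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    if hasattr(conn, "enable_load_extension"):
        print("This Python can load SQLite extensions")
    else:
        print("This Python build cannot load SQLite extensions")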
\n \n In some cases you may see the following error message instead: \n AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension' \n On macOS the easiest fix for this is to install Datasette using Homebrew: \n brew install datasette \n Use which datasette to confirm that datasette will run that version. The output should look something like this: \n /usr/local/opt/datasette/bin/datasette \n If you get a different location here such as /Library/Frameworks/Python.framework/Versions/3.10/bin/datasette you can run the following command to cause datasette to execute the Homebrew version instead: \n alias datasette=$(echo $(brew --prefix datasette)/bin/datasette) \n You can undo this operation using: \n unalias datasette \n If you need to run SQLite with extension support for other Python code, you can do so by installing Python itself using Homebrew: \n brew install python \n Then executing Python using: \n /usr/local/opt/python@3/libexec/bin/python \n A more convenient way to work with this version of Python may be to use it to create a virtual environment: \n /usr/local/opt/python@3/libexec/bin/python -m venv datasette-venv \n Then activate it like this: \n source datasette-venv/bin/activate \n Now running python and pip will work against a version of Python 3 that includes support for SQLite extensions: \n pip install datasette\nwhich datasette\ndatasette --version", "breadcrumbs": "[\"Installation\"]", "references": "[]"} {"id": "installation:installation-homebrew", "page": "installation", "ref": "installation-homebrew", "title": "Using Homebrew", "content": "If you have a Mac and use Homebrew , you can install Datasette by running this command in your terminal: \n brew install datasette \n This should install the latest version. You can confirm by running: \n datasette --version \n You can upgrade to the latest Homebrew packaged version using: \n brew upgrade datasette \n Once you have installed Datasette you can install plugins using the following: \n datasette install datasette-vega \n If the latest packaged release of Datasette has not yet been made available through Homebrew, you can upgrade your Homebrew installation in-place using: \n datasette install -U datasette", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "installation:installation-pip", "page": "installation", "ref": "installation-pip", "title": "Using pip", "content": "Datasette requires Python 3.8 or higher. The Python.org Python For Beginners page has instructions for getting started. \n You can install Datasette and its dependencies using pip : \n pip install datasette \n You can now run Datasette like so: \n datasette", "breadcrumbs": "[\"Installation\", \"Basic installation\"]", "references": "[{\"href\": \"https://www.python.org/about/gettingstarted/\", \"label\": \"Python.org Python For Beginners\"}]"} {"id": "installation:installation-pipx", "page": "installation", "ref": "installation-pipx", "title": "Using pipx", "content": "pipx is a tool for installing Python software with all of its dependencies in an isolated environment, to ensure that they will not conflict with any other installed Python software.
\n If you use Homebrew on macOS you can install pipx like this: \n brew install pipx\npipx ensurepath \n Without Homebrew you can install it like so: \n python3 -m pip install --user pipx\npython3 -m pipx ensurepath \n The pipx ensurepath command configures your shell to ensure it can find commands that have been installed by pipx - generally by making sure ~/.local/bin has been added to your PATH . \n Once pipx is installed you can use it to install Datasette like this: \n pipx install datasette \n Then run datasette --version to confirm that it has been successfully installed.", "breadcrumbs": "[\"Installation\", \"Advanced installation options\"]", "references": "[{\"href\": \"https://pipxproject.github.io/pipx/\", \"label\": \"pipx\"}, {\"href\": \"https://brew.sh/\", \"label\": \"Homebrew\"}]"} {"id": "installation:installing-plugins", "page": "installation", "ref": "installing-plugins", "title": "Installing plugins", "content": "If you want to install plugins into your local Datasette Docker image you can do\n so using the following recipe. This will install the plugins and then save a\n brand new local image called datasette-with-plugins : \n docker run datasetteproject/datasette \\\n pip install datasette-vega\n\ndocker commit $(docker ps -lq) datasette-with-plugins \n You can now run the new custom image like so: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasette-with-plugins \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \n You can confirm that the plugins are installed by visiting\n http://127.0.0.1:8001/-/plugins \n Some plugins such as datasette-ripgrep may need additional system packages. You can install these by running apt-get install inside the container: \n docker run datasette-057a0 bash -c '\n apt-get update &&\n apt-get install ripgrep &&\n pip install datasette-ripgrep'\n\ndocker commit $(docker ps -lq) datasette-with-ripgrep", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using Docker\"]", "references": "[{\"href\": \"http://127.0.0.1:8001/-/plugins\", \"label\": \"http://127.0.0.1:8001/-/plugins\"}, {\"href\": \"https://datasette.io/plugins/datasette-ripgrep\", \"label\": \"datasette-ripgrep\"}]"} {"id": "installation:installing-plugins-using-pipx", "page": "installation", "ref": "installing-plugins-using-pipx", "title": "Installing plugins using pipx", "content": "You can install additional datasette plugins with pipx inject like so: \n pipx inject datasette datasette-json-html \n injected package datasette-json-html into venv datasette\ndone! \u2728 \ud83c\udf1f \u2728 \n Then to confirm the plugin was installed correctly: \n datasette plugins \n [\n {\n \"name\": \"datasette-json-html\",\n \"static\": false,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"} {"id": "installation:loading-spatialite", "page": "installation", "ref": "loading-spatialite", "title": "Loading SpatiaLite", "content": "The datasetteproject/datasette image includes a recent version of the\n SpatiaLite extension for SQLite. 
To load and enable that\n module, use the following command: \n docker run -p 8001:8001 -v `pwd`:/mnt \\\n datasetteproject/datasette \\\n datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db \\\n --load-extension=spatialite \n You can confirm that SpatiaLite is successfully loaded by visiting\n http://127.0.0.1:8001/-/versions", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using Docker\"]", "references": "[{\"href\": \"http://127.0.0.1:8001/-/versions\", \"label\": \"http://127.0.0.1:8001/-/versions\"}]"} {"id": "installation:upgrading-packages-using-pipx", "page": "installation", "ref": "upgrading-packages-using-pipx", "title": "Upgrading packages using pipx", "content": "You can upgrade your pipx installation to the latest release of Datasette using pipx upgrade datasette : \n pipx upgrade datasette \n upgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette) \n To upgrade a plugin within the pipx environment use pipx runpip datasette install -U name-of-plugin - like this: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6\"\n }\n] \n Now upgrade the plugin: \n pipx runpip datasette install -U datasette-vega \n Collecting datasette-vega\nDownloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.8 MB 2.0 MB/s\n...\nInstalling collected packages: datasette-vega\nAttempting uninstall: datasette-vega\n Found existing installation: datasette-vega 0.6\n Uninstalling datasette-vega-0.6:\n Successfully uninstalled datasette-vega-0.6\nSuccessfully installed datasette-vega-0.6.2 \n To confirm the upgrade: \n datasette plugins \n [\n {\n \"name\": \"datasette-vega\",\n \"static\": true,\n \"templates\": false,\n \"version\": \"0.6.2\"\n }\n]", "breadcrumbs": "[\"Installation\", \"Advanced installation options\", \"Using pipx\"]", "references": "[]"} {"id": "internals:database-close", "page": "internals", "ref": "database-close", "title": "db.close()", "content": "Closes all of the open connections to file-backed databases. This is mainly intended to be used by large test suites, to avoid hitting limits on the number of open files.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-constructor", "page": "internals", "ref": "database-constructor", "title": "Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None)", "content": "The Database() constructor can be used by plugins, in conjunction with .add_database(db, name=None, route=None) , to create and register new databases. \n The arguments are as follows: \n \n \n ds - Datasette class (required) \n \n The Datasette instance you are attaching this database to. \n \n \n \n path - string \n \n Path to a SQLite database file on disk. \n \n \n \n is_mutable - boolean \n \n Set this to False to cause Datasette to open the file in immutable mode. \n \n \n \n is_memory - boolean \n \n Use this to create non-shared memory connections. \n \n \n \n memory_name - string or None \n \n Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist any changes made to them for the lifetime of the Datasette server process.
\n \n \n \n The first argument is the datasette instance you are attaching to, the second is a path= , then is_mutable and is_memory are both optional arguments.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute", "page": "internals", "ref": "database-execute", "title": "await db.execute(sql, ...)", "content": "Executes a SQL query against the database and returns the resulting rows (see Results ). \n \n \n sql - string (required) \n \n The SQL query to execute. This can include ? or :named parameters. \n \n \n \n params - list or dict \n \n A list or dictionary of values to use for the parameters. List for ? , dictionary for :named . \n \n \n \n truncate - boolean \n \n Should the rows returned by the query be truncated at the maximum page size? Defaults to True , set this to False to disable truncation. \n \n \n \n custom_time_limit - integer ms \n \n A custom time limit for this query. This can be set to a lower value than the Datasette configured default. If a query takes longer than this it will be terminated early and raise a datasette.database.QueryInterrupted exception. \n \n \n \n page_size - integer \n \n Set a custom page size for truncation, over-riding the configured Datasette default. \n \n \n \n log_sql_errors - boolean \n \n Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to True .", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute-fn", "page": "internals", "ref": "database-execute-fn", "title": "await db.execute_fn(fn)", "content": "Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the await . \n Example usage: \n def get_version(conn):\n return conn.execute(\n \"select sqlite_version()\"\n ).fetchall()[0][0]\n\n\nversion = await db.execute_fn(get_version)", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute-isolated-fn", "page": "internals", "ref": "database-execute-isolated-fn", "title": "await db.execute_isolated_fn(fn)", "content": "This method works like execute_write_fn() but executes the provided function in an entirely isolated SQLite connection, which is opened, used and then closed again in a single call to this method. \n The prepare_connection() plugin hook is not executed against this connection. \n This allows plugins to execute database operations that might conflict with how database connections are usually configured. For example, running a VACUUM operation while bypassing any restrictions placed by the datasette-sqlite-authorizer plugin. \n Plugins can also use this method to load potentially dangerous SQLite extensions, use them to perform an operation and then have them safely unloaded at the end of the call, without risk of exposing them to other connections. \n Functions run using execute_isolated_fn() share the same queue as execute_write_fn() , which guarantees that no writes can be executed at the same time as the isolated function is executing. \n The return value of the function will be returned by this method.
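For example, a plugin might use an isolated connection to run that VACUUM operation - a minimal sketch, assuming db is a Database instance obtained from datasette.get_database():

    def vacuum_database(conn):
        # This connection is opened, used and closed within this one call
        conn.execute("VACUUM")
        return conn.execute("pragma page_count").fetchone()[0]

    page_count = await db.execute_isolated_fn(vacuum_database)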
Any exceptions raised by the function will be raised out of the await line as well.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://github.com/datasette/datasette-sqlite-authorizer\", \"label\": \"datasette-sqlite-authorizer\"}]"} {"id": "internals:database-execute-write", "page": "internals", "ref": "database-execute-write", "title": "await db.execute_write(sql, params=None, block=True)", "content": "SQLite only allows one database connection to write at a time. Datasette handles this for you by maintaining a queue of writes to be executed against a given database. Plugins can submit write operations to this queue and they will be executed in the order in which they are received. \n This method can be used to queue up a non-SELECT SQL query to be executed against a single write connection to the database. \n You can pass additional SQL parameters as a tuple or dictionary. \n The method will block until the operation is completed, and the return value will be the return from calling conn.execute(...) using the underlying sqlite3 Python library. \n If you pass block=False this behavior changes to \"fire and forget\" - queries will be added to the write queue and executed in a separate thread while your code can continue to do other things. The method will return a UUID representing the queued task. \n Each call to execute_write() will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute-write-fn", "page": "internals", "ref": "database-execute-write-fn", "title": "await db.execute_write_fn(fn, block=True, transaction=True)", "content": "This method works like .execute_write() , but instead of a SQL statement you give it a callable Python function. Your function will be queued up and then called when the write connection is available, passing that connection as the argument to the function. \n The function can then perform multiple actions, safe in the knowledge that it has exclusive access to the single writable connection for as long as it is executing. \n \n fn needs to be a regular function, not an async def function. \n \n For example: \n def delete_and_return_count(conn):\n conn.execute(\"delete from some_table where id > 5\")\n return conn.execute(\n \"select count(*) from some_table\"\n ).fetchone()[0]\n\n\ntry:\n num_rows_left = await database.execute_write_fn(\n delete_and_return_count\n )\nexcept Exception as e:\n print(\"An error occurred:\", e) \n The value returned from await database.execute_write_fn(...) will be the return value from your function. \n If your function raises an exception that exception will be propagated up to the await line. \n By default your function will be executed inside a transaction. You can pass transaction=False to disable this behavior, though if you do that you should be careful to manually apply transactions - ideally using the with conn: pattern, or you may see OperationalError: database table is locked errors. \n If you specify block=False the method becomes fire-and-forget, queueing your function to be executed and then allowing your code after the call to .execute_write_fn() to continue running while the underlying thread waits for an opportunity to run your function. A UUID representing the queued task will be returned. 
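Here is a sketch of that fire-and-forget pattern, assuming db is a Database instance and using a hypothetical log table:

    def purge_old_rows(conn):
        # Hypothetical table and column - replace with your own schema
        conn.execute(
            "delete from log where created < date('now', '-30 days')"
        )

    # Returns a UUID for the queued task without waiting for it to run
    task_id = await db.execute_write_fn(purge_old_rows, block=False)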
Any exceptions in your code will be silently swallowed.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-execute-write-many", "page": "internals", "ref": "database-execute-write-many", "title": "await db.execute_write_many(sql, params_seq, block=True)", "content": "Like execute_write() but uses the sqlite3 conn.executemany() method. This will efficiently execute the same SQL statement against each of the parameters in the params_seq iterator, for example: \n await db.execute_write_many(\n \"insert into characters (id, name) values (?, ?)\",\n [(1, \"Melanie\"), (2, \"Selma\"), (3, \"Viktor\")],\n) \n Each call to execute_write_many() will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executemany\", \"label\": \"conn.executemany()\"}]"} {"id": "internals:database-execute-write-script", "page": "internals", "ref": "database-execute-write-script", "title": "await db.execute_write_script(sql, block=True)", "content": "Like execute_write() but can be used to send multiple SQL statements in a single string separated by semicolons, using the sqlite3 conn.executescript() method. \n Each call to execute_write_script() will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executescript\", \"label\": \"conn.executescript()\"}]"} {"id": "internals:database-hash", "page": "internals", "ref": "database-hash", "title": "db.hash", "content": "If the database was opened in immutable mode, this property returns the 64 character SHA-256 hash of the database contents as a string. Otherwise it returns None .", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"} {"id": "internals:database-results", "page": "internals", "ref": "database-results", "title": "Results", "content": "The db.execute() method returns a single Results object. This can be used to access the rows returned by the query. \n Iterating over a Results object will yield SQLite Row objects . Each of these can be treated as a tuple or can be accessed using row[\"column\"] syntax: \n info = []\nresults = await db.execute(\"select name from sqlite_master\")\nfor row in results:\n info.append(row[\"name\"]) \n The Results object also has the following properties and methods: \n \n \n .truncated - boolean \n \n Indicates if this query was truncated - if it returned more results than the specified page_size . If this is true then the results object will only provide access to the first page_size rows in the query result. You can disable truncation by passing truncate=False to the db.execute() method. \n \n \n \n .columns - list of strings \n \n A list of column names returned by the query. \n \n \n \n .rows - list of sqlite3.Row \n \n This property provides direct access to the list of rows returned by the database. You can access specific rows by index using results.rows[0] . \n \n \n \n .first() - row or None \n \n Returns the first row in the results, or None if no rows were returned. \n \n \n \n .single_value() \n \n Returns the value of the first column of the first row of results - but only if the query returned a single row with a single column. Raises a datasette.database.MultipleValues exception otherwise.
\n \n \n \n .__len__() \n \n Calling len(results) returns the (truncated) number of returned results.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#row-objects\", \"label\": \"Row objects\"}]"} {"id": "internals:datasette-absolute-url", "page": "internals", "ref": "datasette-absolute-url", "title": ".absolute_url(request, path)", "content": "request - Request \n \n The current Request object \n \n \n \n path - string \n \n A path, for example /dbname/table.json \n \n \n \n Returns the absolute URL for the given path, including the protocol and host. For example: \n absolute_url = datasette.absolute_url(\n request, \"/dbname/table.json\"\n)\n# Would return \"http://localhost:8001/dbname/table.json\" \n The current request object is used to determine the hostname and protocol that should be used for the returned URL. The force_https_urls configuration setting is taken into account.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-actors-from-ids", "page": "internals", "ref": "datasette-actors-from-ids", "title": "await .actors_from_ids(actor_ids)", "content": "actor_ids - list of strings or integers \n \n A list of actor IDs to look up. \n \n \n \n Returns a dictionary, where the keys are the IDs passed to it and the values are the corresponding actor dictionaries. \n This method is mainly designed to be used with plugins. See the actors_from_ids(datasette, actor_ids) documentation for details. \n If no plugins that implement that hook are installed, the default return value looks like this: \n {\n \"1\": {\"id\": \"1\"},\n \"2\": {\"id\": \"2\"}\n}", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-add-database", "page": "internals", "ref": "datasette-add-database", "title": ".add_database(db, name=None, route=None)", "content": "db - datasette.database.Database instance \n \n The database to be attached. \n \n \n \n name - string, optional \n \n The name to be used for this database . If not specified Datasette will pick one based on the filename or memory name. \n \n \n \n route - string, optional \n \n This will be used in the URL path. If not specified, it will default to the same thing as the name . \n \n \n \n The datasette.add_database(db) method lets you add a new database to the current Datasette instance. \n The db parameter should be an instance of the datasette.database.Database class. For example: \n from datasette.database import Database\n\ndatasette.add_database(\n Database(\n datasette,\n path=\"path/to/my-new-database.db\",\n )\n) \n This will add a mutable database and serve it at /my-new-database . \n Use is_mutable=False to add an immutable database. \n .add_database() returns the Database instance, with its name set as the database.name attribute. 
Any time you are working with a newly added database you should use the return value of .add_database() , for example: \n db = datasette.add_database(\n Database(datasette, memory_name=\"statistics\")\n)\nawait db.execute_write(\n \"CREATE TABLE foo(id integer primary key)\"\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-add-memory-database", "page": "internals", "ref": "datasette-add-memory-database", "title": ".add_memory_database(name)", "content": "Adds a shared in-memory database with the specified name: \n datasette.add_memory_database(\"statistics\") \n This is a shortcut for the following: \n from datasette.database import Database\n\ndatasette.add_database(\n Database(datasette, memory_name=\"statistics\")\n) \n Using either of these pattern will result in the in-memory database being served at /statistics .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-add-message", "page": "internals", "ref": "datasette-add-message", "title": ".add_message(request, message, type=datasette.INFO)", "content": "request - Request \n \n The current Request object \n \n \n \n message - string \n \n The message string \n \n \n \n type - constant, optional \n \n The message type - datasette.INFO , datasette.WARNING or datasette.ERROR \n \n \n \n Datasette's flash messaging mechanism allows you to add a message that will be displayed to the user on the next page that they visit. Messages are persisted in a ds_messages cookie. This method adds a message to that cookie. \n You can try out these messages (including the different visual styling of the three message types) using the /-/messages debugging tool.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-check-visibility", "page": "internals", "ref": "datasette-check-visibility", "title": "await .check_visibility(actor, action=None, resource=None, permissions=None)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n action - string, optional \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n permissions - list of action strings or (action, resource) tuples, optional \n \n Provide this instead of action and resource to check multiple permissions at once. \n \n \n \n This convenience method can be used to answer the question \"should this item be considered private, in that it is visible to me but it is not visible to anonymous users?\" \n It returns a tuple of two booleans, (visible, private) . visible indicates if the actor can see this resource. private will be True if an anonymous user would not be able to view the resource. \n This example checks if the user can access a specific table, and sets private so that a padlock icon can later be displayed: \n visible, private = await datasette.check_visibility(\n request.actor,\n action=\"view-table\",\n resource=(database, table),\n) \n The following example runs three checks in a row, similar to await .ensure_permissions(actor, permissions) . If any of the checks are denied before one of them is explicitly granted then visible will be False . 
private will be True if an anonymous user would not be able to view the resource. \n visible, private = await datasette.check_visibility(\n request.actor,\n permissions=[\n (\"view-table\", (database, table)),\n (\"view-database\", database),\n \"view-instance\",\n ],\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-create-token", "page": "internals", "ref": "datasette-create-token", "title": ".create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None)", "content": "actor_id - string \n \n The ID of the actor to create a token for. \n \n \n \n expires_after - int, optional \n \n The number of seconds after which the token should expire. \n \n \n \n restrict_all - iterable, optional \n \n A list of actions that this token should be restricted to across all databases and resources. \n \n \n \n restrict_database - dict, optional \n \n For restricting actions within specific databases, e.g. {\"mydb\": [\"view-table\", \"view-query\"]} . \n \n \n \n restrict_resource - dict, optional \n \n For restricting actions to specific resources (tables, SQL views and Canned queries ) within a database. For example: {\"mydb\": {\"mytable\": [\"insert-row\", \"update-row\"]}} . \n \n \n \n This method returns a signed API token of the format dstok_... which can be used to authenticate requests to the Datasette API. \n All tokens must have an actor_id string indicating the ID of the actor which the token will act on behalf of. \n Tokens default to lasting forever, but can be set to expire after a given number of seconds using the expires_after argument. The following code creates a token for user1 that will expire after an hour: \n token = datasette.create_token(\n actor_id=\"user1\",\n expires_after=3600,\n) \n The three restrict_* arguments can be used to create a token that has additional restrictions beyond what the associated actor is allowed to do. \n The following example creates a token that can access view-instance and view-table across everything, can additionally use view-query for anything in the docs database and is allowed to execute insert-row and update-row in the attachments table in that database: \n token = datasette.create_token(\n actor_id=\"user1\",\n restrict_all=(\"view-instance\", \"view-table\"),\n restrict_database={\"docs\": (\"view-query\",)},\n restrict_resource={\n \"docs\": {\n \"attachments\": (\"insert-row\", \"update-row\")\n }\n },\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-databases", "page": "internals", "ref": "datasette-databases", "title": ".databases", "content": "Property exposing a collections.OrderedDict of databases currently connected to Datasette. \n The dictionary keys are the name of the database that is used in the URL - e.g. /fixtures would have a key of \"fixtures\" . The values are Database class instances. \n All databases are listed, irrespective of user permissions.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-ensure-permissions", "page": "internals", "ref": "datasette-ensure-permissions", "title": "await .ensure_permissions(actor, permissions)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n permissions - list \n \n A list of permissions to check. 
{"id": "internals:datasette-get-database", "page": "internals", "ref": "datasette-get-database", "title": ".get_database(name)", "content": "name - string, optional \n \n The name of the database. \n \n \n \n Returns the specified database object. Raises a KeyError if the database does not exist. Call this method without an argument to return the first connected database.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-get-permission", "page": "internals", "ref": "datasette-get-permission", "title": ".get_permission(name_or_abbr)", "content": "name_or_abbr - string \n \n The name or abbreviation of the permission to look up, e.g. view-table or vt . \n \n \n \n Returns a Permission object representing the permission, or raises a KeyError if one is not found.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-permission-allowed", "page": "internals", "ref": "datasette-permission-allowed", "title": "await .permission_allowed(actor, action, resource=None, default=...)", "content": "actor - dictionary \n \n The authenticated actor. This is usually request.actor . \n \n \n \n action - string \n \n The name of the action that is being permission checked. \n \n \n \n resource - string or tuple, optional \n \n The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource. \n \n \n \n default - optional: True, False or None \n \n What value should be returned by default if nothing provides an opinion on this permission check.\n Set to True for default allow or False for default deny.\n If not specified the default from the Permission() tuple that was registered using register_permissions(datasette) will be used. \n \n \n \n Check if the given actor has permission to perform the given action on the given resource. \n Some permission checks are carried out against rules defined in datasette.yaml , while other custom permissions may be decided by plugins that implement the permission_allowed(datasette, actor, action, resource) plugin hook. \n If neither datasette.yaml nor any of the plugins provide an answer to the permission query the default argument will be returned. \n See Built-in permissions for a full list of permission actions included in Datasette core.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-permissions", "page": "internals", "ref": "datasette-permissions", "title": ".permissions", "content": "Property exposing a dictionary of permissions that have been registered using the register_permissions(datasette) plugin hook. \n The dictionary keys are the permission names - e.g. view-instance - and the values are Permission() objects describing the permission. Here is a description of that object .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
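A minimal sketch of a single default-deny check, assuming database and table are already in scope; the surrounding branch is invented for illustration, the call signature is the one documented above:

# Hypothetical sketch: explicitly fall back to deny if neither the
# configuration nor any plugin has an opinion on this check.
allowed = await datasette.permission_allowed(
    request.actor,
    "view-table",
    resource=(database, table),
    default=False,
)
if not allowed:
    ...  # hide this table from the current actor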
{"id": "internals:datasette-plugin-config", "page": "internals", "ref": "datasette-plugin-config", "title": ".plugin_config(plugin_name, database=None, table=None)", "content": "plugin_name - string \n \n The name of the plugin to look up configuration for. Usually this is something similar to datasette-cluster-map . \n \n \n \n database - None or string \n \n The database the user is interacting with. \n \n \n \n table - None or string \n \n The table the user is interacting with. \n \n \n \n This method lets you read plugin configuration values that were set in datasette.yaml . See Writing plugins that accept configuration for full details of how this method should be used. \n The return value will be the value from the configuration file - usually a dictionary. \n If the plugin is not configured the return value will be None .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-remove-database", "page": "internals", "ref": "datasette-remove-database", "title": ".remove_database(name)", "content": "name - string \n \n The name of the database to be removed. \n \n \n \n This removes a database that has been previously added. name= is the unique name of that database.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-render-template", "page": "internals", "ref": "datasette-render-template", "title": "await .render_template(template, context=None, request=None)", "content": "template - string, list of strings or jinja2.Template \n \n The template file to be rendered, e.g. my_plugin.html . Datasette will search for this file first in the --template-dir= location, if it was specified - then in the plugin's bundled templates and finally in Datasette's set of default templates. \n If this is a list of template file names then the first one that exists will be loaded and rendered. \n If this is a Jinja Template object it will be used directly. \n \n \n \n context - None or a Python dictionary \n \n The context variables to pass to the template. \n \n \n \n request - request object or None \n \n If you pass a Datasette request object here it will be made available to the template. \n \n \n \n Renders a Jinja template using Datasette's preconfigured instance of Jinja and returns the resulting string. The template will have access to Datasette's default template functions and any functions that have been made available by other plugins.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[{\"href\": \"https://jinja.palletsprojects.com/en/2.11.x/api/#jinja2.Template\", \"label\": \"Template object\"}, {\"href\": \"https://jinja.palletsprojects.com/en/2.11.x/\", \"label\": \"Jinja template\"}]"} {"id": "internals:datasette-resolve-database", "page": "internals", "ref": "datasette-resolve-database", "title": ".resolve_database(request)", "content": "request - Request object \n \n A request object \n \n \n \n If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a database named group, you can use this method to resolve the database object. \n This returns a Database instance. \n If the database cannot be found, it raises a datasette.utils.asgi.DatabaseNotFound exception - which is a subclass of datasette.utils.asgi.NotFound with a .database_name attribute set to the name of the database that was requested.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
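To make the named-group requirement concrete, here is a hedged sketch of a custom view that resolves the database and renders a template; the route pattern, view name and template file are all invented for illustration:

from datasette import hookimpl
from datasette.utils.asgi import Response

async def database_report(datasette, request):
    # Works because the route below declares a "database" named group
    db = datasette.resolve_database(request)
    html = await datasette.render_template(
        "database_report.html",  # hypothetical template file
        {"database_name": db.name},
        request=request,
    )
    return Response.html(html)

@hookimpl
def register_routes():
    return [(r"^/-/report/(?P<database>[^/]+)$", database_report)]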
{"id": "internals:datasette-resolve-row", "page": "internals", "ref": "datasette-resolve-row", "title": ".resolve_row(request)", "content": "request - Request object \n \n A request object \n \n \n \n This method assumes your route declares named groups for database , table and pks . \n It returns a ResolvedRow named tuple instance with the following fields: \n \n \n db - Database \n \n The database object \n \n \n \n table - string \n \n The name of the table \n \n \n \n sql - string \n \n SQL snippet that can be used in a WHERE clause to select the row \n \n \n \n params - dict \n \n Parameters that should be passed to the SQL query \n \n \n \n pks - list \n \n List of primary key column names \n \n \n \n pk_values - list \n \n List of primary key values decoded from the URL \n \n \n \n row - sqlite3.Row \n \n The row itself \n \n \n \n If the database cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. \n If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception. \n If the row cannot be found it raises a datasette.utils.asgi.RowNotFound exception. This has .database_name , .table and .pk_values attributes, extracted from the request path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-resolve-table", "page": "internals", "ref": "datasette-resolve-table", "title": ".resolve_table(request)", "content": "request - Request object \n \n A request object \n \n \n \n This assumes that the regular expression for your route declares both a database and a table named group. \n It returns a ResolvedTable named tuple instance with the following fields: \n \n \n db - Database \n \n The database object \n \n \n \n table - string \n \n The name of the table (or view) \n \n \n \n is_view - boolean \n \n True if this is a view, False if it is a table \n \n \n \n If the database cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception. \n If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception - a subclass of datasette.utils.asgi.NotFound with .database_name and .table attributes.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"} {"id": "internals:datasette-setting", "page": "internals", "ref": "datasette-setting", "title": ".setting(key)", "content": "key - string \n \n The name of the setting, e.g. base_url . \n \n \n \n Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting. \n For example: \n downloads_are_allowed = datasette.setting(\"allow_download\")", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}
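As a final illustration of how these lookups compose, here is a hedged sketch combining .get_database() and .setting() ; the "fixtures" database name and the download-path logic are invented for illustration, while the KeyError behaviour comes straight from the .get_database() contract above:

# Hypothetical sketch: only build a database download link when the
# allow_download setting is enabled.
download_path = None
if datasette.setting("allow_download"):
    try:
        db = datasette.get_database("fixtures")  # invented database name
    except KeyError:
        # Fall back to the first connected database
        db = datasette.get_database()
    download_path = "/{}.db".format(db.name)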