{"id": "authentication:authentication", "page": "authentication", "ref": "authentication", "title": "Authentication and permissions", "content": "Datasette doesn't require authentication by default. Any visitor to a Datasette instance can explore the full data and execute read-only SQL queries. \n Datasette's plugin system can be used to add many different styles of authentication, such as user accounts, single sign-on or API keys.", "breadcrumbs": "[]", "references": "[]"} {"id": "binary_data:binary", "page": "binary_data", "ref": "binary", "title": "Binary data", "content": "SQLite tables can contain binary data in BLOB columns. \n Datasette includes special handling for these binary values. The Datasette interface detects binary values and provides a link to download their content, for example on https://latest.datasette.io/fixtures/binary_data \n \n Binary data is represented in .json exports using Base64 encoding. \n https://latest.datasette.io/fixtures/binary_data.json?_shape=array \n [\n {\n \"rowid\": 1,\n \"data\": {\n \"$base64\": true,\n \"encoded\": \"FRwCx60F/g==\"\n }\n },\n {\n \"rowid\": 2,\n \"data\": {\n \"$base64\": true,\n \"encoded\": \"FRwDx60F/g==\"\n }\n },\n {\n \"rowid\": 3,\n \"data\": null\n }\n]", "breadcrumbs": "[]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/binary_data\", \"label\": \"https://latest.datasette.io/fixtures/binary_data\"}, {\"href\": \"https://latest.datasette.io/fixtures/binary_data.json?_shape=array\", \"label\": \"https://latest.datasette.io/fixtures/binary_data.json?_shape=array\"}]"} {"id": "changelog:id1", "page": "changelog", "ref": "id1", "title": "Changelog", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "cli-reference:id1", "page": "cli-reference", "ref": "id1", "title": "CLI reference", "content": "The datasette CLI tool provides a number of commands. \n Running datasette without specifying a command runs the default command, datasette serve . See datasette serve for the full list of options for that command. \n [[[cog\nfrom datasette import cli\nfrom click.testing import CliRunner\nimport textwrap\ndef help(args):\n title = \"datasette \" + \" \".join(args)\n cog.out(\"\\n::\\n\\n\")\n result = CliRunner().invoke(cli.cli, args)\n output = result.output.replace(\"Usage: cli \", \"Usage: datasette \")\n cog.out(textwrap.indent(output, ' '))\n cog.out(\"\\n\\n\") \n ]]] \n [[[end]]]", "breadcrumbs": "[]", "references": "[]"} {"id": "configuration:id1", "page": "configuration", "ref": "id1", "title": "Configuration", "content": "Datasette offers several ways to configure your Datasette instances: server settings, plugin configuration, authentication, and more. \n Most configuration can be handled using a datasette.yaml configuration file, passed to datasette using the -c/--config flag: \n datasette mydatabase.db --config datasette.yaml \n This file can also use JSON, as datasette.json . YAML is recommended over JSON due to its support for comments and multi-line strings.", "breadcrumbs": "[]", "references": "[]"} {"id": "contributing:id1", "page": "contributing", "ref": "id1", "title": "Contributing", "content": "Datasette is an open source project. We welcome contributions! \n This document describes how to contribute to Datasette core. 
You can also contribute to the wider Datasette ecosystem by creating new Plugins .", "breadcrumbs": "[]", "references": "[]"} {"id": "csv_export:id1", "page": "csv_export", "ref": "id1", "title": "CSV export", "content": "Any Datasette table, view or custom SQL query can be exported as CSV. \n To obtain the CSV representation of the table you are looking at, click the \"this\n data as CSV\" link. \n You can also use the advanced export form for more control over the resulting\n file, which has the following options: \n \n \n \n download file - instead of displaying CSV in your browser, this forces\n your browser to download the CSV to your downloads directory. \n \n \n expand labels - if your table has any foreign key references this option\n will cause the CSV to gain additional COLUMN_NAME_label columns with a\n label for each foreign key derived from the linked table. In this example \n the city_id column is accompanied by a city_id_label column. \n \n \n stream all rows - by default CSV files only contain the first\n max_returned_rows records. This option will cause Datasette to\n loop through every matching record and return them as a single CSV file. \n \n \n You can try that out on https://latest.datasette.io/fixtures/facetable?_size=4", "breadcrumbs": "[]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/facetable.csv?_labels=on&_size=max\", \"label\": \"In this example\"}, {\"href\": \"https://latest.datasette.io/fixtures/facetable?_size=4\", \"label\": \"https://latest.datasette.io/fixtures/facetable?_size=4\"}]"} {"id": "custom_templates:customization", "page": "custom_templates", "ref": "customization", "title": "Custom pages and templates", "content": "Datasette provides a number of ways of customizing the way data is displayed.", "breadcrumbs": "[]", "references": "[]"} {"id": "deploying:deploying", "page": "deploying", "ref": "deploying", "title": "Deploying Datasette", "content": "The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel. \n You can deploy Datasette to other hosting providers using the instructions on this page.", "breadcrumbs": "[]", "references": "[]"} {"id": "ecosystem:ecosystem", "page": "ecosystem", "ref": "ecosystem", "title": "The Datasette Ecosystem", "content": "Datasette sits at the center of a growing ecosystem of open source tools aimed at making it as easy as possible to gather, analyze and publish interesting data. \n These tools are divided into two main groups: tools for building SQLite databases (for use with Datasette) and plugins that extend Datasette's functionality. \n The Datasette project website includes a directory of plugins and a directory of tools: \n \n \n Plugins directory on datasette.io \n \n \n Tools directory on datasette.io", "breadcrumbs": "[]", "references": "[{\"href\": \"https://datasette.io/\", \"label\": \"Datasette project website\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Plugins directory on datasette.io\"}, {\"href\": \"https://datasette.io/tools\", \"label\": \"Tools directory on datasette.io\"}]"} {"id": "events:id1", "page": "events", "ref": "id1", "title": "Events", "content": "Datasette includes a mechanism for tracking events that occur while the software is running. 
This is primarily intended to be used by plugins, which can both trigger events and listen for events. \n The core Datasette application triggers events when certain things happen. This page describes those events. \n Plugins can listen for events using the track_event(datasette, event) plugin hook, which will be called with instances of the following classes - or additional classes registered by other plugins . \n \n \n \n \n class datasette.events. LoginEvent actor : dict | None \n \n Event name: login \n A user (represented by event.actor ) has logged in. \n \n \n \n \n class datasette.events. LogoutEvent actor : dict | None \n \n Event name: logout \n A user (represented by event.actor ) has logged out. \n \n \n \n \n class datasette.events. CreateTokenEvent actor : dict | None expires_after : int | None restrict_all : list restrict_database : dict restrict_resource : dict \n \n Event name: create-token \n A user created an API token. \n \n \n Variables \n \n \n \n expires_after -- Number of seconds after which this token will expire. \n \n \n restrict_all -- Restricted permissions for this token. \n \n \n restrict_database -- Restricted database permissions for this token. \n \n \n restrict_resource -- Restricted resource permissions for this token. \n \n \n \n \n \n \n \n \n \n class datasette.events. CreateTableEvent actor : dict | None database : str table : str schema : str \n \n Event name: create-table \n A new table has been created in the database. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was created. \n \n \n table -- The name of the table that was created. \n \n \n schema -- The SQL schema definition for the new table. \n \n \n \n \n \n \n \n \n \n class datasette.events. DropTableEvent actor : dict | None database : str table : str \n \n Event name: drop-table \n A table has been dropped from the database. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was dropped. \n \n \n table -- The name of the table that was dropped. \n \n \n \n \n \n \n \n \n \n class datasette.events. AlterTableEvent actor : dict | None database : str table : str before_schema : str after_schema : str \n \n Event name: alter-table \n A table has been altered. \n \n \n Variables \n \n \n \n database -- The name of the database where the table was altered. \n \n \n table -- The name of the table that was altered. \n \n \n before_schema -- The table's SQL schema before the alteration. \n \n \n after_schema -- The table's SQL schema after the alteration. \n \n \n \n \n \n \n \n \n \n class datasette.events. InsertRowsEvent actor : dict | None database : str table : str num_rows : int ignore : bool replace : bool \n \n Event name: insert-rows \n Rows were inserted into a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the rows were inserted. \n \n \n table -- The name of the table where the rows were inserted. \n \n \n num_rows -- The number of rows that were requested to be inserted. \n \n \n ignore -- Was ignore set? \n \n \n replace -- Was replace set? \n \n \n \n \n \n \n \n \n \n class datasette.events. UpsertRowsEvent actor : dict | None database : str table : str num_rows : int \n \n Event name: upsert-rows \n Rows were upserted into a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the rows were upserted. \n \n \n table -- The name of the table where the rows were upserted. \n \n \n num_rows -- The number of rows that were requested to be upserted. 
\n \n \n \n \n \n \n \n \n \n class datasette.events. UpdateRowEvent actor : dict | None database : str table : str pks : list \n \n Event name: update-row \n A row was updated in a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the row was updated. \n \n \n table -- The name of the table where the row was updated. \n \n \n pks -- The primary key values of the updated row. \n \n \n \n \n \n \n \n \n \n class datasette.events. DeleteRowEvent actor : dict | None database : str table : str pks : list \n \n Event name: delete-row \n A row was deleted from a table. \n \n \n Variables \n \n \n \n database -- The name of the database where the row was deleted. \n \n \n table -- The name of the table where the row was deleted. \n \n \n pks -- The primary key values of the deleted row.", "breadcrumbs": "[]", "references": "[]"} {"id": "facets:id1", "page": "facets", "ref": "id1", "title": "Facets", "content": "Datasette facets can be used to add a faceted browse interface to any database table.\n With facets, tables are displayed along with a summary showing the most common values in specified columns.\n These values can be selected to further filter the table. \n Here's an example : \n \n Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://congress-legislators.datasettes.com/legislators/legislator_terms?_facet=type&_facet=party&_facet=state&_facet_size=10\", \"label\": \"an example\"}]"} {"id": "full_text_search:id1", "page": "full_text_search", "ref": "id1", "title": "Full-text search", "content": "SQLite includes a powerful mechanism for enabling full-text search against SQLite records. Datasette can detect if a table has had full-text search configured for it in the underlying database and display a search interface for filtering that table. \n Here's an example search : \n \n Datasette automatically detects which tables have been configured for full-text search.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://www.sqlite.org/fts3.html\", \"label\": \"a powerful mechanism for enabling full-text search\"}, {\"href\": \"https://register-of-members-interests.datasettes.com/regmem/items?_search=hamper&_sort_desc=date\", \"label\": \"an example search\"}]"} {"id": "getting_started:getting-started", "page": "getting_started", "ref": "getting-started", "title": "Getting started", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "index:datasette", "page": "index", "ref": "datasette", "title": "Datasette", "content": "An open source multi-tool for exploring and publishing data \n Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API. \n Datasette is aimed at data journalists, museum curators, archivists, local governments and anyone else who has data that they wish to share with the world. It is part of a wider ecosystem of tools and plugins dedicated to making working with structured data as productive as possible. \n Explore a demo , watch a presentation about the project or Try Datasette without installing anything using Glitch . \n Interested in learning Datasette? Start with the official tutorials . \n Support questions, feedback? 
Join the Datasette Discord .", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pypi.org/project/datasette/\", \"label\": null}, {\"href\": \"https://docs.datasette.io/en/stable/changelog.html\", \"label\": null}, {\"href\": \"https://pypi.org/project/datasette/\", \"label\": null}, {\"href\": \"https://github.com/simonw/datasette/actions?query=workflow%3ATest\", \"label\": null}, {\"href\": \"https://github.com/simonw/datasette/blob/main/LICENSE\", \"label\": null}, {\"href\": \"https://hub.docker.com/r/datasetteproject/datasette\", \"label\": null}, {\"href\": \"https://datasette.io/discord\", \"label\": null}, {\"href\": \"https://fivethirtyeight.datasettes.com/fivethirtyeight\", \"label\": \"Explore a demo\"}, {\"href\": \"https://static.simonwillison.net/static/2018/pybay-datasette/\", \"label\": \"a presentation about the project\"}, {\"href\": \"https://datasette.io/tutorials\", \"label\": \"the official tutorials\"}, {\"href\": \"https://datasette.io/discord\", \"label\": \"Datasette Discord\"}]"} {"id": "installation:id1", "page": "installation", "ref": "id1", "title": "Installation", "content": "If you just want to try Datasette out you don't need to install anything: see Try Datasette without installing anything using Glitch \n \n There are two main options for installing Datasette. You can install it directly on to your machine, or you can install it using Docker. \n If you want to start making contributions to the Datasette project by installing a copy that lets you directly modify the code, take a look at our guide to Setting up a development environment . \n \n \n \n Basic installation \n \n \n Datasette Desktop for Mac \n \n \n Using Homebrew \n \n \n Using pip \n \n \n \n \n Advanced installation options \n \n \n Using pipx \n \n \n Installing plugins using pipx \n \n \n Upgrading packages using pipx \n \n \n \n \n Using Docker \n \n \n Loading SpatiaLite \n \n \n Installing plugins \n \n \n \n \n \n \n A note about extensions", "breadcrumbs": "[]", "references": "[]"} {"id": "internals:internals", "page": "internals", "ref": "internals", "title": "Internals for plugins", "content": "Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.", "breadcrumbs": "[]", "references": "[]"} {"id": "introspection:id1", "page": "introspection", "ref": "id1", "title": "Introspection", "content": "Datasette includes some pages and JSON API endpoints for introspecting the current instance. These can be used to understand some of the internals of Datasette and to see how a particular instance has been configured. \n Each of these pages can be viewed in your browser. 
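For example, with an instance running locally on the default port, http://localhost:8001/-/versions shows version information and http://localhost:8001/-/plugins lists installed plugins (a representative sample of the built-in endpoints; the exact set varies by Datasette version and which plugins are installed). 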
Add .json to the URL to get back the contents as JSON.", "breadcrumbs": "[]", "references": "[]"} {"id": "javascript_plugins:id1", "page": "javascript_plugins", "ref": "id1", "title": "JavaScript plugins", "content": "Datasette can run custom JavaScript in several different ways: \n \n \n Datasette plugins written in Python can use the extra_js_urls() or extra_body_script() plugin hooks to inject JavaScript into a page \n \n \n Datasette instances with custom templates can include additional JavaScript in those templates \n \n \n The extra_js_urls key in datasette.yaml can be used to include extra JavaScript \n \n \n There are no limitations on what this JavaScript can do. It is executed directly by the browser, so it can manipulate the DOM, fetch additional data and do anything else that JavaScript is capable of. \n \n Custom JavaScript has security implications, especially for authenticated Datasette instances where the JavaScript might run in the context of the authenticated user. It's important to carefully review any JavaScript you run in your Datasette instance.", "breadcrumbs": "[]", "references": "[]"} {"id": "json_api:id1", "page": "json_api", "ref": "id1", "title": "JSON API", "content": "Datasette provides a JSON API for your SQLite databases. Anything you can do\n through the Datasette user interface can also be accessed as JSON via the API. \n To access the API for a page, either click on the .json link on that page or\n edit the URL and add a .json extension to it.", "breadcrumbs": "[]", "references": "[]"} {"id": "metadata:id1", "page": "metadata", "ref": "id1", "title": "Metadata", "content": "Data loves metadata. Any time you run Datasette you can optionally include a\n YAML or JSON file with metadata about your databases and tables. Datasette will then\n display that information in the web UI. \n Run Datasette like this: \n datasette database1.db database2.db --metadata metadata.yaml \n Your metadata.yaml file can look something like this: \n [[[cog\nfrom metadata_doc import metadata_example\nmetadata_example(cog, {\n \"title\": \"Custom title for your index page\",\n \"description\": \"Some description text can go here\",\n \"license\": \"ODbL\",\n \"license_url\": \"https://opendatacommons.org/licenses/odbl/\",\n \"source\": \"Original Data Source\",\n \"source_url\": \"http://example.com/\"\n}) \n ]]] \n [[[end]]] \n Choosing YAML over JSON adds support for multi-line strings and comments. \n The above metadata will be displayed on the index page of your Datasette-powered\n site. The source and license information will also be included in the footer of\n every page served by Datasette. \n Any special HTML characters in description will be escaped. If you want to\n include HTML in your description, you can use a description_html property\n instead.", "breadcrumbs": "[]", "references": "[]"} {"id": "pages:pages", "page": "pages", "ref": "pages", "title": "Pages and API endpoints", "content": "The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API.", "breadcrumbs": "[]", "references": "[]"} {"id": "performance:performance", "page": "performance", "ref": "performance", "title": "Performance and caching", "content": "Datasette runs on top of SQLite, and SQLite has excellent performance. 
For small databases almost any query should return in just a few milliseconds, and larger databases (100s of MBs or even GBs of data) should perform extremely well provided your queries make sensible use of database indexes. \n That said, there are a number of tricks you can use to improve Datasette's performance.", "breadcrumbs": "[]", "references": "[]"} {"id": "plugin_hooks:id1", "page": "plugin_hooks", "ref": "id1", "title": "Plugin hooks", "content": "Datasette plugins use plugin hooks to customize Datasette's behavior. These hooks are powered by the pluggy plugin system. \n Each plugin can implement one or more hooks using the @hookimpl decorator against a function whose name matches one of the hooks documented on this page. \n When you implement a plugin hook you can accept any or all of the parameters that are documented as being passed to that hook. \n For example, you can implement the render_cell plugin hook like this even though the full documented hook signature is render_cell(row, value, column, table, database, datasette, request) : \n @hookimpl\ndef render_cell(value, column):\n    if column == \"stars\":\n        return \"*\" * int(value) \n \n List of plugin hooks \n \n \n prepare_connection(conn, database, datasette) \n \n \n prepare_jinja2_environment(env, datasette) \n \n \n Page extras \n \n \n extra_template_vars(template, database, table, columns, view_name, request, datasette) \n \n \n extra_css_urls(template, database, table, columns, view_name, request, datasette) \n \n \n extra_js_urls(template, database, table, columns, view_name, request, datasette) \n \n \n extra_body_script(template, database, table, columns, view_name, request, datasette) \n \n \n \n \n publish_subcommand(publish) \n \n \n render_cell(row, value, column, table, database, datasette, request) \n \n \n register_output_renderer(datasette) \n \n \n register_routes(datasette) \n \n \n register_commands(cli) \n \n \n register_facet_classes() \n \n \n register_permissions(datasette) \n \n \n asgi_wrapper(datasette) \n \n \n startup(datasette) \n \n \n canned_queries(datasette, database, actor) \n \n \n actor_from_request(datasette, request) \n \n \n actors_from_ids(datasette, actor_ids) \n \n \n jinja2_environment_from_request(datasette, request, env) \n \n \n filters_from_request(request, database, table, datasette) \n \n \n permission_allowed(datasette, actor, action, resource) \n \n \n register_magic_parameters(datasette) \n \n \n forbidden(datasette, request, message) \n \n \n handle_exception(datasette, request, exception) \n \n \n skip_csrf(datasette, scope) \n \n \n get_metadata(datasette, key, database, table) \n \n \n menu_links(datasette, actor, request) \n \n \n Action hooks \n \n \n table_actions(datasette, actor, database, table, request) \n \n \n view_actions(datasette, actor, database, view, request) \n \n \n query_actions(datasette, actor, database, query_name, request, sql, params) \n \n \n row_actions(datasette, actor, request, database, table, row) \n \n \n database_actions(datasette, actor, database, request) \n \n \n homepage_actions(datasette, actor, request) \n \n \n \n \n Template slots \n \n \n top_homepage(datasette, request) \n \n \n top_database(datasette, request, database) \n \n \n top_table(datasette, request, database, table) \n \n \n top_row(datasette, request, database, table, row) \n \n \n top_query(datasette, request, database, sql) \n \n \n top_canned_query(datasette, request, database, query_name) \n \n \n \n \n Event tracking \n \n \n track_event(datasette, event) \n \n \n 
register_events(datasette)", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pluggy.readthedocs.io/\", \"label\": \"pluggy\"}]"} {"id": "plugins:id1", "page": "plugins", "ref": "id1", "title": "Plugins", "content": "Datasette's plugin system allows additional features to be implemented as Python\n code (or front-end JavaScript) which can be wrapped up in a separate Python\n package. The underlying mechanism uses pluggy . \n See the Datasette plugins directory for a list of existing plugins, or take a look at the\n datasette-plugin topic on GitHub. \n Things you can do with plugins include: \n \n \n Add visualizations to Datasette, for example\n datasette-cluster-map and\n datasette-vega . \n \n \n Make new custom SQL functions available for use within Datasette, for example\n datasette-haversine and\n datasette-jellyfish . \n \n \n Define custom output formats with custom extensions, for example datasette-atom and\n datasette-ics . \n \n \n Add template functions that can be called within your Jinja custom templates,\n for example datasette-render-markdown . \n \n \n Customize how database values are rendered in the Datasette interface, for example\n datasette-render-binary and\n datasette-pretty-json . \n \n \n Customize how Datasette's authentication and permissions systems work, for example datasette-auth-passwords and\n datasette-permissions-sql .", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pluggy.readthedocs.io/\", \"label\": \"pluggy\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Datasette plugins directory\"}, {\"href\": \"https://github.com/topics/datasette-plugin\", \"label\": \"datasette-plugin\"}, {\"href\": \"https://github.com/simonw/datasette-cluster-map\", \"label\": \"datasette-cluster-map\"}, {\"href\": \"https://github.com/simonw/datasette-vega\", \"label\": \"datasette-vega\"}, {\"href\": \"https://github.com/simonw/datasette-haversine\", \"label\": \"datasette-haversine\"}, {\"href\": \"https://github.com/simonw/datasette-jellyfish\", \"label\": \"datasette-jellyfish\"}, {\"href\": \"https://github.com/simonw/datasette-atom\", \"label\": \"datasette-atom\"}, {\"href\": \"https://github.com/simonw/datasette-ics\", \"label\": \"datasette-ics\"}, {\"href\": \"https://github.com/simonw/datasette-render-markdown#markdown-in-templates\", \"label\": \"datasette-render-markdown\"}, {\"href\": \"https://github.com/simonw/datasette-render-binary\", \"label\": \"datasette-render-binary\"}, {\"href\": \"https://github.com/simonw/datasette-pretty-json\", \"label\": \"datasette-pretty-json\"}, {\"href\": \"https://github.com/simonw/datasette-auth-passwords\", \"label\": \"datasette-auth-passwords\"}, {\"href\": \"https://github.com/simonw/datasette-permissions-sql\", \"label\": \"datasette-permissions-sql\"}]"} {"id": "publish:publishing", "page": "publish", "ref": "publishing", "title": "Publishing data", "content": "Datasette includes tools for publishing and deploying your data to the internet. The datasette publish command will deploy a new Datasette instance containing your databases directly to a Heroku or Google Cloud hosting account. 
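For example, a minimal sketch of each deployment (mydatabase.db and my-service are placeholder names, and each command assumes the relevant provider's CLI tooling is already installed and authenticated): \n datasette publish heroku mydatabase.db \n datasette publish cloudrun mydatabase.db --service=my-service 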
You can also use datasette package to create a Docker image that bundles your databases together with the datasette application that is used to serve them.", "breadcrumbs": "[]", "references": "[]"} {"id": "settings:id1", "page": "settings", "ref": "id1", "title": "Settings", "content": "", "breadcrumbs": "[]", "references": "[]"} {"id": "spatialite:id1", "page": "spatialite", "ref": "id1", "title": "SpatiaLite", "content": "The SpatiaLite module for SQLite adds features for handling geographic and spatial data. For an example of what you can do with it, see the tutorial Building a location to time zone API with SpatiaLite . \n To use it with Datasette, you need to install the mod_spatialite dynamic library. This can then be loaded into Datasette using the --load-extension command-line option. \n Datasette can look for SpatiaLite in common installation locations if you run it like this: \n datasette --load-extension=spatialite --setting default_allow_sql off \n If SpatiaLite is in another location, use the full path to the extension instead: \n datasette --setting default_allow_sql off \\\n --load-extension=/usr/local/lib/mod_spatialite.dylib", "breadcrumbs": "[]", "references": "[{\"href\": \"https://www.gaia-gis.it/fossil/libspatialite/index\", \"label\": \"SpatiaLite module\"}, {\"href\": \"https://datasette.io/tutorials/spatialite\", \"label\": \"Building a location to time zone API with SpatiaLite\"}]"} {"id": "sql_queries:sql", "page": "sql_queries", "ref": "sql", "title": "Running SQL queries", "content": "Datasette treats SQLite database files as read-only and immutable. This means it is not possible to execute INSERT or UPDATE statements using Datasette, which allows us to expose SELECT statements to the outside world without needing to worry about SQL injection attacks. \n The easiest way to execute custom SQL against Datasette is through the web UI. The database index page includes a SQL editor that lets you run any SELECT query you like. You can also construct queries using the filter interface on the tables page, then click \"View and edit SQL\" to open that query in the custom SQL editor. \n Note that this interface is only available if the execute-sql permission is allowed. See Controlling the ability to execute arbitrary SQL . \n Any Datasette SQL query is reflected in the URL of the page, allowing you to bookmark them, share them with others and navigate through previous queries using your browser back button. \n You can also retrieve the results of any query as JSON by adding .json to the base URL.", "breadcrumbs": "[]", "references": "[]"} {"id": "testing_plugins:id1", "page": "testing_plugins", "ref": "id1", "title": "Testing plugins", "content": "We recommend using pytest to write automated tests for your plugins. \n If you use the template described in Starting an installable plugin using cookiecutter your plugin will start with a single test in your tests/ directory that looks like this: \n from datasette.app import Datasette\nimport pytest\n\n\n@pytest.mark.asyncio\nasync def test_plugin_is_installed():\n datasette = Datasette(memory=True)\n response = await datasette.client.get(\"/-/plugins.json\")\n assert response.status_code == 200\n installed_plugins = {p[\"name\"] for p in response.json()}\n assert (\n \"datasette-plugin-template-demo\"\n in installed_plugins\n ) \n This test uses the datasette.client object to exercise a test instance of Datasette. datasette.client is a wrapper around the HTTPX Python library which can imitate HTTP requests using ASGI. 
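For example, any URL the instance can serve is reachable in the same way (a minimal sketch; /-/versions.json is a built-in introspection endpoint): \n response = await datasette.client.get(\"/-/versions.json\")\nassert response.status_code == 200 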
This is the recommended way to write tests against a Datasette instance. \n This test also uses the pytest-asyncio package to add support for async def test functions running under pytest. \n You can install these packages like so: \n pip install pytest pytest-asyncio \n If you are building an installable package you can add them as test dependencies to your setup.py module like this: \n setup(\n name=\"datasette-my-plugin\",\n # ...\n extras_require={\"test\": [\"pytest\", \"pytest-asyncio\"]},\n tests_require=[\"datasette-my-plugin[test]\"],\n) \n You can then install the test dependencies like so: \n pip install -e '.[test]' \n Then run the tests using pytest like so: \n pytest", "breadcrumbs": "[]", "references": "[{\"href\": \"https://docs.pytest.org/\", \"label\": \"pytest\"}, {\"href\": \"https://www.python-httpx.org/\", \"label\": \"HTTPX\"}, {\"href\": \"https://pypi.org/project/pytest-asyncio/\", \"label\": \"pytest-asyncio\"}]"} {"id": "writing_plugins:id1", "page": "writing_plugins", "ref": "id1", "title": "Writing plugins", "content": "You can write one-off plugins that apply to just one Datasette instance, or you can write plugins which can be installed using pip and can be shipped to the Python Package Index ( PyPI ) for other people to install. \n Want to start by looking at an example? The Datasette plugins directory lists more than 90 open source plugins with code you can explore. The plugin hooks page includes links to example plugins for each of the documented hooks.", "breadcrumbs": "[]", "references": "[{\"href\": \"https://pypi.org/\", \"label\": \"PyPI\"}, {\"href\": \"https://datasette.io/plugins\", \"label\": \"Datasette plugins directory\"}]"}