rowid,title,content,sections_fts,rank
101,register_magic_parameters(datasette),"datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
Magic parameters can be used to add automatic parameters to canned queries . This plugin hook allows additional magic parameters to be defined by plugins.
Magic parameters all take this format: _prefix_rest_of_parameter . The prefix indicates which magic parameter function should be called - the rest of the parameter is passed as an argument to that function.
To register a new function, return it as a tuple of (string prefix, function) from this hook. The function you register should take two arguments: key and request , where key is the rest_of_parameter portion of the parameter and request is the current Request object .
This example registers two new magic parameters: :_request_http_version returning the HTTP version of the current request, and :_uuid_new which returns a new UUID:
from datasette import hookimpl
from uuid import uuid4
def uuid(key, request):
if key == ""new"":
return str(uuid4())
else:
raise KeyError
def request(key, request):
if key == ""http_version"":
return request.scope[""http_version""]
else:
raise KeyError
@hookimpl
def register_magic_parameters(datasette):
return [
(""request"", request),
(""uuid"", uuid),
]",736,
102,"forbidden(datasette, request, message)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to render templates or execute SQL queries.
request - Request object
The current HTTP request.
message - string
A message hinting at why the request was forbidden.
Plugins can use this to customize how Datasette responds when a 403 Forbidden error occurs - usually because a page failed a permission check, see Permissions .
If a plugin hook wishes to react to the error, it should return a Response object .
This example returns a redirect to a /-/login page:
from datasette import hookimpl
from urllib.parse import urlencode
@hookimpl
def forbidden(request, message):
return Response.redirect(
""/-/login?="" + urlencode({""message"": message})
)
The function can alternatively return an awaitable function if it needs to make any asynchronous method calls. This example renders a template:
from datasette import hookimpl, Response
@hookimpl
def forbidden(datasette):
async def inner():
return Response.html(
await datasette.render_template(
""render_message.html"", request=request
)
)
return inner",736,
103,"handle_exception(datasette, request, exception)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to render templates or execute SQL queries.
request - Request object
The current HTTP request.
exception - Exception
The exception that was raised.
This hook is called any time an unexpected exception is raised. You can use it to record the exception.
If your handler returns a Response object it will be returned to the client in place of the default Datasette error page.
The handler can return a response directly, or it can return return an awaitable function that returns a response.
This example logs an error to Sentry and then renders a custom error page:
from datasette import hookimpl, Response
import sentry_sdk
@hookimpl
def handle_exception(datasette, exception):
sentry_sdk.capture_exception(exception)
async def inner():
return Response.html(
await datasette.render_template(
""custom_error.html"", request=request
)
)
return inner
Example: datasette-sentry",736,
104,"skip_csrf(datasette, scope)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
scope - dictionary
The ASGI scope for the incoming HTTP request.
This hook can be used to skip CSRF protection for a specific incoming request. For example, you might have a custom path at /submit-comment which is designed to accept comments from anywhere, whether or not the incoming request originated on the site and has an accompanying CSRF token.
This example will disable CSRF protection for that specific URL path:
from datasette import hookimpl
@hookimpl
def skip_csrf(scope):
return scope[""path""] == ""/submit-comment""
If any of the currently active skip_csrf() plugin hooks return True , CSRF protection will be skipped for the request.",736,
105,"get_metadata(datasette, key, database, table)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
actor - dictionary or None
The currently authenticated actor .
database - string or None
The name of the database metadata is being asked for.
table - string or None
The name of the table.
key - string or None
The name of the key for which data is being asked for.
This hook is responsible for returning a dictionary corresponding to Datasette Metadata . This function is passed the database , table and key which were passed to the upstream internal request for metadata. Regardless, it is important to return a global metadata object, where ""databases"": [] would be a top-level key. The dictionary returned here, will be merged with, and overwritten by, the contents of the physical metadata.yaml if one is present.
The design of this plugin hook does not currently provide a mechanism for interacting with async code, and may change in the future. See issue 1384 .
@hookimpl
def get_metadata(datasette, key, database, table):
metadata = {
""title"": ""This will be the Datasette landing page title!"",
""description"": get_instance_description(datasette),
""databases"": [],
}
for db_name, db_data_dict in get_my_database_meta(
datasette, database, table, key
):
metadata[""databases""][db_name] = db_data_dict
# whatever we return here will be merged with any other plugins using this hook and
# will be overwritten by a local metadata.yaml if one exists!
return metadata
Example: datasette-remote-metadata plugin",736,
106,"menu_links(datasette, actor, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
request - Request object or None
The current HTTP request. This can be None if the request object is not available.
This hook allows additional items to be included in the menu displayed by Datasette's top right menu icon.
The hook should return a list of {""href"": ""..."", ""label"": ""...""} menu items. These will be added to the menu.
It can alternatively return an async def awaitable function which returns a list of menu items.
This example adds a new menu item but only if the signed in user is ""root"" :
from datasette import hookimpl
@hookimpl
def menu_links(datasette, actor):
if actor and actor.get(""id"") == ""root"":
return [
{
""href"": datasette.urls.path(
""/-/edit-schema""
),
""label"": ""Edit schema"",
},
]
Using datasette.urls here ensures that links in the menu will take the base_url setting into account.
Examples: datasette-search-all , datasette-graphql",736,
107,Action hooks,"Action hooks can be used to add items to the action menus that appear at the top of different pages within Datasette. Unlike menu_links() , actions which are displayed on every page, actions should only be relevant to the page the user is currently viewing.
Each of these hooks should return return a list of {""href"": ""..."", ""label"": ""...""} menu items, with optional ""description"": ""..."" keys describing each action in more detail.
They can alternatively return an async def awaitable function which, when called, returns a list of those menu items.",736,
108,"table_actions(datasette, actor, database, table, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
table - string
The name of the table.
request - Request object or None
The current HTTP request. This can be None if the request object is not available.
This example adds a new table action if the signed in user is ""root"" :
from datasette import hookimpl
@hookimpl
def table_actions(datasette, actor, database, table):
if actor and actor.get(""id"") == ""root"":
return [
{
""href"": datasette.urls.path(
""/-/edit-schema/{}/{}"".format(
database, table
)
),
""label"": ""Edit schema for this table"",
""description"": ""Add, remove, rename or alter columns for this table."",
}
]
Example: datasette-graphql",736,
109,"view_actions(datasette, actor, database, view, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
view - string
The name of the SQL view.
request - Request object or None
The current HTTP request. This can be None if the request object is not available.
Like table_actions(datasette, actor, database, table, request) but for SQL views.",736,
110,"query_actions(datasette, actor, database, query_name, request, sql, params)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
query_name - string or None
The name of the canned query, or None if this is an arbitrary SQL query.
request - Request object
The current HTTP request.
sql - string
The SQL query being executed
params - dictionary
The parameters passed to the SQL query, if any.
Populates a ""Query actions"" menu on the canned query and arbitrary SQL query pages.
This example adds a new query action linking to a page for explaining a query:
from datasette import hookimpl
import urllib
@hookimpl
def query_actions(datasette, database, query_name, sql):
# Don't explain an explain
if sql.lower().startswith(""explain""):
return
return [
{
""href"": datasette.urls.database(database)
+ ""?""
+ urllib.parse.urlencode(
{
""sql"": ""explain "" + sql,
}
),
""label"": ""Explain this query"",
""description"": ""Get a summary of how SQLite executes the query"",
},
]
Example: datasette-create-view",736,
111,"row_actions(datasette, actor, request, database, table, row)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
request - Request object or None
The current HTTP request.
database - string
The name of the database.
table - string
The name of the table.
row - sqlite.Row
The SQLite row object being displayed on the page.
Return links for the ""Row actions"" menu shown at the top of the row page.
This example displays the row in JSON plus some additional debug information if the user is signed in:
from datasette import hookimpl
@hookimpl
def row_actions(datasette, database, table, actor, row):
if actor:
return [
{
""href"": datasette.urls.instance(),
""label"": f""Row details for {actor['id']}"",
""description"": json.dumps(
dict(row), default=repr
),
},
]
Example: datasette-enrichments",736,
112,"database_actions(datasette, actor, database, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
database - string
The name of the database.
request - Request object
The current HTTP request.
Populates an actions menu on the database page.
This example adds a new database action for creating a table, if the user has the edit-schema permission:
from datasette import hookimpl
@hookimpl
def database_actions(datasette, actor, database):
async def inner():
if not await datasette.permission_allowed(
actor,
""edit-schema"",
resource=database,
default=False,
):
return []
return [
{
""href"": datasette.urls.path(
""/-/edit-schema/{}/-/create"".format(
database
)
),
""label"": ""Create a table"",
}
]
return inner
Example: datasette-graphql , datasette-edit-schema",736,
113,"homepage_actions(datasette, actor, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) , or to execute SQL queries.
actor - dictionary or None
The currently authenticated actor .
request - Request object
The current HTTP request.
Populates an actions menu on the top-level index homepage of the Datasette instance.
This example adds a link an imagined tool for editing the homepage, only for signed in users:
from datasette import hookimpl
@hookimpl
def homepage_actions(datasette, actor):
if actor:
return [
{
""href"": datasette.urls.path(
""/-/customize-homepage""
),
""label"": ""Customize homepage"",
}
]",736,
114,Template slots,"The following set of plugin hooks can be used to return extra HTML content that will be inserted into the corresponding page, directly below the
heading.
Multiple plugins can contribute content here. The order in which it is displayed can be controlled using Pluggy's call time order options .
Each of these plugin hooks can return either a string or an awaitable function that returns a string.",736,
115,"top_homepage(datasette, request)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
Returns HTML to be displayed at the top of the Datasette homepage.",736,
116,"top_database(datasette, request, database)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
Returns HTML to be displayed at the top of the database page.",736,
117,"top_table(datasette, request, database, table)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
table - string
The name of the table.
Returns HTML to be displayed at the top of the table page.",736,
118,"top_row(datasette, request, database, table, row)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
table - string
The name of the table.
row - sqlite.Row
The SQLite row object being displayed.
Returns HTML to be displayed at the top of the row page.",736,
119,"top_query(datasette, request, database, sql)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
sql - string
The SQL query.
Returns HTML to be displayed at the top of the query results page.",736,
120,"top_canned_query(datasette, request, database, query_name)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
request - Request object
The current HTTP request.
database - string
The name of the database.
query_name - string
The name of the canned query.
Returns HTML to be displayed at the top of the canned query page.",736,
121,Event tracking,"Datasette includes an internal mechanism for tracking notable events. This can be used for analytics, but can also be used by plugins that want to listen out for when key events occur (such as a table being created) and take action in response.
Plugins can register to receive events using the track_event plugin hook.
They can also define their own events for other plugins to receive using the register_events() plugin hook , combined with calls to the datasette.track_event() internal method .",736,
122,"track_event(datasette, event)","datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
event - Event
Information about the event, represented as an instance of a subclass of the Event base class.
This hook will be called any time an event is tracked by code that calls the datasette.track_event(...) internal method.
The event object will always have the following properties:
name : a string representing the name of the event, for example logout or create-table .
actor : a dictionary representing the actor that triggered the event, or None if the event was not triggered by an actor.
created : a datatime.datetime object in the timezone.utc timezone representing the time the event object was created.
Other properties on the event will be available depending on the type of event. You can also access those as a dictionary using event.properties() .
The events fired by Datasette core are documented here .
This example plugin logs details of all events to standard error:
from datasette import hookimpl
import json
import sys
@hookimpl
def track_event(event):
name = event.name
actor = event.actor
properties = event.properties()
msg = json.dumps(
{
""name"": name,
""actor"": actor,
""properties"": properties,
}
)
print(msg, file=sys.stderr, flush=True)
The function can also return an async function which will be awaited. This is useful for writing to a database.
This example logs events to a datasette_events table in a database called events . It uses the startup() hook to create that table if it does not exist.
from datasette import hookimpl
import json
@hookimpl
def startup(datasette):
async def inner():
db = datasette.get_database(""events"")
await db.execute_write(
""""""
create table if not exists datasette_events (
id integer primary key,
event_type text,
created text,
actor text,
properties text
)
""""""
)
return inner
@hookimpl
def track_event(datasette, event):
async def inner():
db = datasette.get_database(""events"")
properties = event.properties()
await db.execute_write(
""""""
insert into datasette_events (event_type, created, actor, properties)
values (?, strftime('%Y-%m-%d %H:%M:%S', 'now'), ?, ?)
"""""",
(event.name, json.dumps(event.actor), json.dumps(properties)),
)
return inner
Example: datasette-events-db",736,
123,register_events(datasette),"datasette - Datasette class
You can use this to access plugin configuration options via datasette.plugin_config(your_plugin_name) .
This hook should return a list of Event subclasses that represent custom events that the plugin might send to the datasette.track_event() method.
This example registers event subclasses for ban-user and unban-user events:
from dataclasses import dataclass
from datasette import hookimpl, Event
@dataclass
class BanUserEvent(Event):
name = ""ban-user""
user: dict
@dataclass
class UnbanUserEvent(Event):
name = ""unban-user""
user: dict
@hookimpl
def register_events():
return [BanUserEvent, UnbanUserEvent]
The plugin can then call datasette.track_event(...) to send a ban-user event:
await datasette.track_event(
BanUserEvent(user={""id"": 1, ""username"": ""cleverbot""})
)",736,
124,Binary data,"SQLite tables can contain binary data in BLOB columns.
Datasette includes special handling for these binary values. The Datasette interface detects binary values and provides a link to download their content, for example on https://latest.datasette.io/fixtures/binary_data
Binary data is represented in .json exports using Base64 encoding.
https://latest.datasette.io/fixtures/binary_data.json?_shape=array
[
{
""rowid"": 1,
""data"": {
""$base64"": true,
""encoded"": ""FRwCx60F/g==""
}
},
{
""rowid"": 2,
""data"": {
""$base64"": true,
""encoded"": ""FRwDx60F/g==""
}
},
{
""rowid"": 3,
""data"": null
}
]",736,
125,Linking to binary downloads,"The .blob output format is used to return binary data. It requires a _blob_column= query string argument specifying which BLOB column should be downloaded, for example:
https://latest.datasette.io/fixtures/binary_data/1.blob?_blob_column=data
This output format can also be used to return binary data from an arbitrary SQL query. Since such queries do not specify an exact row, an additional ?_blob_hash= parameter can be used to specify the SHA-256 hash of the value that is being linked to.
Consider the query select data from binary_data - demonstrated here .
That page links to the binary value downloads. Those links look like this:
https://latest.datasette.io/fixtures.blob?sql=select+data+from+binary_data&_blob_column=data&_blob_hash=f3088978da8f9aea479ffc7f631370b968d2e855eeb172bea7f6c7a04262bb6d
These .blob links are also returned in the .csv exports Datasette provides for binary tables and queries, since the CSV format does not have a mechanism for representing binary data.",736,
126,Binary plugins,"Several Datasette plugins are available that change the way Datasette treats binary data.
datasette-render-binary modifies Datasette's default interface to show an automatic guess at what type of binary data is being stored, along with a visual representation of the binary value that displays ASCII strings directly in the interface.
datasette-render-images detects common image formats and renders them as images directly in the Datasette interface.
datasette-media allows Datasette interfaces to be configured to serve binary files from configured SQL queries, and includes the ability to resize images directly before serving them.",736,
127,Contributing,"Datasette is an open source project. We welcome contributions!
This document describes how to contribute to Datasette core. You can also contribute to the wider Datasette ecosystem by creating new Plugins .",736,
128,General guidelines,"main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released.
The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue.
New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them.",736,
129,Setting up a development environment,"If you have Python 3.8 or higher installed on your computer (on OS X the quickest way to do this is using homebrew ) you can install an editable copy of Datasette using the following steps.
If you want to use GitHub to publish your changes, first create a fork of datasette under your own GitHub account.
Now clone that repository somewhere on your computer:
git clone git@github.com:YOURNAME/datasette
If you want to get started without creating your own fork, you can do this instead:
git clone git@github.com:simonw/datasette
The next step is to create a virtual environment for your project and use it to install Datasette's dependencies:
cd datasette
# Create a virtual environment in ./venv
python3 -m venv ./venv
# Now activate the virtual environment, so pip can install into it
source venv/bin/activate
# Install Datasette and its testing dependencies
python3 -m pip install -e '.[test]'
That last line does most of the work: pip install -e means ""install this package in a way that allows me to edit the source code in place"". The .[test] option means ""use the setup.py in this directory and install the optional testing dependencies as well"".",736,
130,Running the tests,"Once you have done this, you can run the Datasette unit tests from inside your datasette/ directory using pytest like so:
pytest
You can run the tests faster using multiple CPU cores with pytest-xdist like this:
pytest -n auto -m ""not serial""
-n auto detects the number of available cores automatically. The -m ""not serial"" skips tests that don't work well in a parallel test environment. You can run those tests separately like so:
pytest -m ""serial""",736,
131,Using fixtures,"To run Datasette itself, type datasette .
You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests.
You can create a copy of that database by running this command:
python tests/fixtures.py fixtures.db
Now you can run Datasette against the new fixtures database like so:
datasette fixtures.db
This will start a server at http://127.0.0.1:8001/ .
Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript).
If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes:
datasette --reload fixtures.db
You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that:
python tests/fixtures.py fixtures.db fixtures-metadata.json
Or to output the plugins used by the tests, run this:
python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins
Test tables written to fixtures.db
- metadata written to fixtures-metadata.json
Wrote plugin: fixtures-plugins/register_output_renderer.py
Wrote plugin: fixtures-plugins/view_name.py
Wrote plugin: fixtures-plugins/my_plugin.py
Wrote plugin: fixtures-plugins/messages_output_renderer.py
Wrote plugin: fixtures-plugins/my_plugin_2.py
Then run Datasette like this:
datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/",736,
132,Debugging,"Any errors that occur while Datasette is running while display a stack trace on the console.
You can tell Datasette to open an interactive pdb debugger session if an error occurs using the --pdb option:
datasette --pdb fixtures.db",736,
133,Code formatting,"Datasette uses opinionated code formatters: Black for Python and Prettier for JavaScript.
These formatters are enforced by Datasette's continuous integration: if a commit includes Python or JavaScript code that does not match the style enforced by those tools, the tests will fail.
When developing locally, you can verify and correct the formatting of your code using these tools.",736,
134,Running Black,"Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout:
black . --check
All done! ✨ 🍰 ✨
95 files would be left unchanged.
If any of your code does not conform to Black you can run this to automatically fix those problems:
black .
reformatted ../datasette/setup.py
All done! ✨ 🍰 ✨
1 file reformatted, 94 files left unchanged.",736,
135,blacken-docs,"The blacken-docs command applies Black formatting rules to code examples in the documentation. Run it like this:
blacken-docs -l 60 docs/*.rst",736,
136,Prettier,"To install Prettier, install Node.js and then run the following in the root of your datasette repository checkout:
npm install
This will install Prettier in a node_modules directory. You can then check that your code matches the coding style like so:
npm run prettier -- --check
> prettier
> prettier 'datasette/static/*[!.min].js' ""--check""
Checking formatting...
[warn] datasette/static/plugins.js
[warn] Code style issues found in the above file(s). Forgot to run Prettier?
You can fix any problems by running:
npm run fix",736,
137,Editing and building the documentation,"Datasette's documentation lives in the docs/ directory and is deployed automatically using Read The Docs .
The documentation is written using reStructuredText. You may find this article on The subset of reStructuredText worth committing to memory useful.
You can build it locally by installing sphinx and sphinx_rtd_theme in your Datasette development environment and then running make html directly in the docs/ directory:
# You may first need to activate your virtual environment:
source venv/bin/activate
# Install the dependencies needed to build the docs
pip install -e .[docs]
# Now build the docs
cd docs/
make html
This will create the HTML version of the documentation in docs/_build/html . You can open it in your browser like so:
open _build/html/index.html
Any time you make changes to a .rst file you can re-run make html to update the built documents, then refresh them in your browser.
For added productivity, you can use use sphinx-autobuild to run Sphinx in auto-build mode. This will run a local webserver serving the docs that automatically rebuilds them and refreshes the page any time you hit save in your editor.
sphinx-autobuild will have been installed when you ran pip install -e .[docs] . In your docs/ directory you can start the server by running the following:
make livehtml
Now browse to http://localhost:8000/ to view the documentation. Any edits you make should be instantly reflected in your browser.",736,
138,Running Cog,"Some pages of documentation (in particular the CLI reference ) are automatically updated using Cog .
To update these pages, run the following command:
cog -r docs/*.rst",736,
139,Continuously deployed demo instances,"The demo instance at latest.datasette.io is re-deployed automatically to Google Cloud Run for every push to main that passes the test suite. This is implemented by the GitHub Actions workflow at .github/workflows/deploy-latest.yml .
Specific branches can also be set to automatically deploy by adding them to the on: push: branches block at the top of the workflow YAML file. Branches configured in this way will be deployed to a new Cloud Run service whether or not their tests pass.
The Cloud Run URL for a branch demo can be found in the GitHub Actions logs.",736,
140,Release process,"Datasette releases are performed using tags. When a new release is published on GitHub, a GitHub Action workflow will perform the following:
Run the unit tests against all supported Python versions. If the tests pass...
Build a Docker image of the release and push a tag to https://hub.docker.com/r/datasetteproject/datasette
Re-point the ""latest"" tag on Docker Hub to the new image
Build a wheel bundle of the underlying Python source code
Push that new wheel up to PyPI: https://pypi.org/project/datasette/
If the release is an alpha, navigate to https://readthedocs.org/projects/datasette/versions/ and search for the tag name in the ""Activate a version"" filter, then mark that version as ""active"" to ensure it will appear on the public ReadTheDocs documentation site.
To deploy new releases you will need to have push access to the main Datasette GitHub repository.
Datasette follows Semantic Versioning :
major.minor.patch
We increment major for backwards-incompatible releases. Datasette is currently pre-1.0 so the major version is always 0 .
We increment minor for new features.
We increment patch for bugfix releass.
Alpha and beta releases may have an additional a0 or b0 prefix - the integer component will be incremented with each subsequent alpha or beta.
To release a new version, first create a commit that updates the version number in datasette/version.py and the the changelog with highlights of the new version. An example commit can be seen here :
# Update changelog
git commit -m "" Release 0.51a1
Refs #1056, #1039, #998, #1045, #1033, #1036, #1034, #976, #1057, #1058, #1053, #1064, #1066"" -a
git push
Referencing the issues that are part of the release in the commit message ensures the name of the release shows up on those issue pages, e.g. here .
You can generate the list of issue references for a specific release by copying and pasting text from the release notes or GitHub changes-since-last-release view into this Extract issue numbers from pasted text tool.
To create the tag for the release, create a new release on GitHub matching the new version number. You can convert the release notes to Markdown by copying and pasting the rendered HTML into this Paste to Markdown tool .
Finally, post a news item about the release on datasette.io by editing the news.yaml file in that site's repository.",736,
141,Alpha and beta releases,"Alpha and beta releases are published to preview upcoming features that may not yet be stable - in particular to preview new plugin hooks.
You are welcome to try these out, but please be aware that details may change before the final release.
Please join discussions on the issue tracker to share your thoughts and experiences with on alpha and beta features that you try out.",736,
142,Releasing bug fixes from a branch,"If it's necessary to publish a bug fix release without shipping new features that have landed on main a release branch can be used.
Create it from the relevant last tagged release like so:
git branch 0.52.x 0.52.4
git checkout 0.52.x
Next cherry-pick the commits containing the bug fixes:
git cherry-pick COMMIT
Write the release notes in the branch, and update the version number in version.py . Then push the branch:
git push -u origin 0.52.x
Once the tests have completed, publish the release from that branch target using the GitHub Draft a new release form.
Finally, cherry-pick the commit with the release notes and version number bump across to main :
git checkout main
git cherry-pick COMMIT
git push",736,
143,Upgrading CodeMirror,"Datasette bundles CodeMirror for the SQL editing interface, e.g. on this page . Here are the steps for upgrading to a new version of CodeMirror:
Install the packages with:
npm i codemirror @codemirror/lang-sql
Build the bundle using the version number from package.json with:
node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js \
-f iife \
-n cm \
-o datasette/static/cm-editor-6.0.1.bundle.js \
-p @rollup/plugin-node-resolve \
-p @rollup/plugin-terser
Update the version reference in the codemirror.html template.",736,
144,Facets,"Datasette facets can be used to add a faceted browse interface to any database table.
With facets, tables are displayed along with a summary showing the most common values in specified columns.
These values can be selected to further filter the table.
Here's an example :
Facets can be specified in two ways: using query string parameters, or in metadata.json configuration for the table.",736,
145,Facets in query strings,"To turn on faceting for specific columns on a Datasette table view, add one or more _facet=COLUMN parameters to the URL.
For example, if you want to turn on facets for the city_id and state columns, construct a URL that looks like this:
/dbname/tablename?_facet=state&_facet=city_id
This works for both the HTML interface and the .json view.
When enabled, facets will cause a facet_results block to be added to the JSON output, looking something like this:
{
""state"": {
""name"": ""state"",
""results"": [
{
""value"": ""CA"",
""label"": ""CA"",
""count"": 10,
""toggle_url"": ""http://...?_facet=city_id&_facet=state&state=CA"",
""selected"": false
},
{
""value"": ""MI"",
""label"": ""MI"",
""count"": 4,
""toggle_url"": ""http://...?_facet=city_id&_facet=state&state=MI"",
""selected"": false
},
{
""value"": ""MC"",
""label"": ""MC"",
""count"": 1,
""toggle_url"": ""http://...?_facet=city_id&_facet=state&state=MC"",
""selected"": false
}
],
""truncated"": false
}
""city_id"": {
""name"": ""city_id"",
""results"": [
{
""value"": 1,
""label"": ""San Francisco"",
""count"": 6,
""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=1"",
""selected"": false
},
{
""value"": 2,
""label"": ""Los Angeles"",
""count"": 4,
""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=2"",
""selected"": false
},
{
""value"": 3,
""label"": ""Detroit"",
""count"": 4,
""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=3"",
""selected"": false
},
{
""value"": 4,
""label"": ""Memnonia"",
""count"": 1,
""toggle_url"": ""http://...?_facet=city_id&_facet=state&city_id=4"",
""selected"": false
}
],
""truncated"": false
}
}
If Datasette detects that a column is a foreign key, the ""label"" property will be automatically derived from the detected label column on the referenced table.
The default number of facet results returned is 30, controlled by the default_facet_size setting.
You can increase this on an individual page by adding ?_facet_size=100 to the query string, up to a maximum of max_returned_rows (which defaults to 1000).",736,
146,Facets in metadata,"You can turn facets on by default for specific tables by adding them to a ""facets"" key in a Datasette Metadata file.
Here's an example that turns on faceting by default for the qLegalStatus column in the Street_Tree_List table in the sf-trees database:
[[[cog
from metadata_doc import metadata_example
metadata_example(cog, {
""databases"": {
""sf-trees"": {
""tables"": {
""Street_Tree_List"": {
""facets"": [""qLegalStatus""]
}
}
}
}
})
]]]
[[[end]]]
Facets defined in this way will always be shown in the interface and returned in the API, regardless of the _facet arguments passed to the view.
You can specify array or date facets in metadata using JSON objects with a single key of array or date and a value specifying the column, like this:
[[[cog
metadata_example(cog, {
""facets"": [
{""array"": ""tags""},
{""date"": ""created""}
]
})
]]]
[[[end]]]
You can change the default facet size (the number of results shown for each facet) for a table using facet_size :
[[[cog
metadata_example(cog, {
""databases"": {
""sf-trees"": {
""tables"": {
""Street_Tree_List"": {
""facets"": [""qLegalStatus""],
""facet_size"": 10
}
}
}
}
})
]]]
[[[end]]]",736,
147,Suggested facets,"Datasette's table UI will suggest facets for the user to apply, based on the following criteria:
For the currently filtered data are there any columns which, if applied as a facet...
Will return 30 or less unique options
Will return more than one unique option
Will return less unique options than the total number of filtered rows
And the query used to evaluate this criteria can be completed in under 50ms
That last point is particularly important: Datasette runs a query for every column that is displayed on a page, which could get expensive - so to avoid slow load times it sets a time limit of just 50ms for each of those queries.
This means suggested facets are unlikely to appear for tables with millions of records in them.",736,
148,Speeding up facets with indexes,"The performance of facets can be greatly improved by adding indexes on the columns you wish to facet by.
Adding indexes can be performed using the sqlite3 command-line utility. Here's how to add an index on the state column in a table called Food_Trucks :
sqlite3 mydatabase.db
SQLite version 3.19.3 2017-06-27 16:48:08
Enter "".help"" for usage hints.
sqlite> CREATE INDEX Food_Trucks_state ON Food_Trucks(""state"");
Or using the sqlite-utils command-line utility:
sqlite-utils create-index mydatabase.db Food_Trucks state",736,
149,Facet by JSON array,"If your SQLite installation provides the json1 extension (you can check using /-/versions ) Datasette will automatically detect columns that contain JSON arrays of values and offer a faceting interface against those columns.
This is useful for modelling things like tags without needing to break them out into a new table.
Example here: latest.datasette.io/fixtures/facetable?_facet_array=tags",736,
150,Facet by date,"If Datasette finds any columns that contain dates in the first 100 values, it will offer a faceting interface against the dates of those values.
This works especially well against timestamp values such as 2019-03-01 12:44:00 .
Example here: latest.datasette.io/fixtures/facetable?_facet_date=created",736,
151,Events,"Datasette includes a mechanism for tracking events that occur while the software is running. This is primarily intended to be used by plugins, which can both trigger events and listen for events.
The core Datasette application triggers events when certain things happen. This page describes those events.
Plugins can listen for events using the track_event(datasette, event) plugin hook, which will be called with instances of the following classes - or additional classes registered by other plugins .
class datasette.events. LoginEvent actor : dict | None
Event name: login
A user (represented by event.actor ) has logged in.
class datasette.events. LogoutEvent actor : dict | None
Event name: logout
A user (represented by event.actor ) has logged out.
class datasette.events. CreateTokenEvent actor : dict | None expires_after : int | None restrict_all : list restrict_database : dict restrict_resource : dict
Event name: create-token
A user created an API token.
Variables
expires_after -- Number of seconds after which this token will expire.
restrict_all -- Restricted permissions for this token.
restrict_database -- Restricted database permissions for this token.
restrict_resource -- Restricted resource permissions for this token.
class datasette.events. CreateTableEvent actor : dict | None database : str table : str schema : str
Event name: create-table
A new table has been created in the database.
Variables
database -- The name of the database where the table was created.
table -- The name of the table that was created
schema -- The SQL schema definition for the new table.
class datasette.events. DropTableEvent actor : dict | None database : str table : str
Event name: drop-table
A table has been dropped from the database.
Variables
database -- The name of the database where the table was dropped.
table -- The name of the table that was dropped
class datasette.events. AlterTableEvent actor : dict | None database : str table : str before_schema : str after_schema : str
Event name: alter-table
A table has been altered.
Variables
database -- The name of the database where the table was altered
table -- The name of the table that was altered
before_schema -- The table's SQL schema before the alteration
after_schema -- The table's SQL schema after the alteration
class datasette.events. InsertRowsEvent actor : dict | None database : str table : str num_rows : int ignore : bool replace : bool
Event name: insert-rows
Rows were inserted into a table.
Variables
database -- The name of the database where the rows were inserted.
table -- The name of the table where the rows were inserted.
num_rows -- The number of rows that were requested to be inserted.
ignore -- Was ignore set?
replace -- Was replace set?
class datasette.events. UpsertRowsEvent actor : dict | None database : str table : str num_rows : int
Event name: upsert-rows
Rows were upserted into a table.
Variables
database -- The name of the database where the rows were inserted.
table -- The name of the table where the rows were inserted.
num_rows -- The number of rows that were requested to be inserted.
class datasette.events. UpdateRowEvent actor : dict | None database : str table : str pks : list
Event name: update-row
A row was updated in a table.
Variables
database -- The name of the database where the row was updated.
table -- The name of the table where the row was updated.
pks -- The primary key values of the updated row.
class datasette.events. DeleteRowEvent actor : dict | None database : str table : str pks : list
Event name: delete-row
A row was deleted from a table.
Variables
database -- The name of the database where the row was deleted.
table -- The name of the table where the row was deleted.
pks -- The primary key values of the deleted row.",736,
152,JSON API,"Datasette provides a JSON API for your SQLite databases. Anything you can do
through the Datasette user interface can also be accessed as JSON via the API.
To access the API for a page, either click on the .json link on that page or
edit the URL and add a .json extension to it.",736,
153,Default representation,"The default JSON representation of data from a SQLite table or custom query
looks like this:
{
""ok"": true,
""rows"": [
{
""id"": 3,
""name"": ""Detroit""
},
{
""id"": 2,
""name"": ""Los Angeles""
},
{
""id"": 4,
""name"": ""Memnonia""
},
{
""id"": 1,
""name"": ""San Francisco""
}
],
""truncated"": false
}
""ok"" is always true if an error did not occur.
The ""rows"" key is a list of objects, each one representing a row.
The ""truncated"" key lets you know if the query was truncated. This can happen if a SQL query returns more than 1,000 results (or the max_returned_rows setting).
For table pages, an additional key ""next"" may be present. This indicates that the next page in the pagination set can be retrieved using ?_next=VALUE .",736,
154,Different shapes,"The _shape parameter can be used to access alternative formats for the
rows key which may be more convenient for your application. There are three
options:
?_shape=objects - ""rows"" is a list of JSON key/value objects - the default
?_shape=arrays - ""rows"" is a list of lists, where the order of values in each list matches the order of the columns
?_shape=array - a JSON array of objects - effectively just the ""rows"" key from the default representation
?_shape=array&_nl=on - a newline-separated list of JSON objects
?_shape=arrayfirst - a flat JSON array containing just the first value from each row
?_shape=object - a JSON object keyed using the primary keys of the rows
_shape=arrays looks like this:
{
""ok"": true,
""next"": null,
""rows"": [
[3, ""Detroit""],
[2, ""Los Angeles""],
[4, ""Memnonia""],
[1, ""San Francisco""]
]
}
_shape=array looks like this:
[
{
""id"": 3,
""name"": ""Detroit""
},
{
""id"": 2,
""name"": ""Los Angeles""
},
{
""id"": 4,
""name"": ""Memnonia""
},
{
""id"": 1,
""name"": ""San Francisco""
}
]
_shape=array&_nl=on looks like this:
{""id"": 1, ""value"": ""Myoporum laetum :: Myoporum""}
{""id"": 2, ""value"": ""Metrosideros excelsa :: New Zealand Xmas Tree""}
{""id"": 3, ""value"": ""Pinus radiata :: Monterey Pine""}
_shape=arrayfirst looks like this:
[1, 2, 3]
_shape=object looks like this:
{
""1"": {
""id"": 1,
""value"": ""Myoporum laetum :: Myoporum""
},
""2"": {
""id"": 2,
""value"": ""Metrosideros excelsa :: New Zealand Xmas Tree""
},
""3"": {
""id"": 3,
""value"": ""Pinus radiata :: Monterey Pine""
}
]
The object shape is only available for queries against tables - custom SQL
queries and views do not have an obvious primary key so cannot be returned using
this format.
The object keys are always strings. If your table has a compound primary
key, the object keys will be a comma-separated string.",736,
155,Pagination,"The default JSON representation includes a ""next_url"" key which can be used to access the next page of results. If that key is null or missing then it means you have reached the final page of results.
Other representations include pagination information in the link HTTP header. That header will look something like this:
link: ; rel=""next""
Here is an example Python function built using requests that returns a list of all of the paginated items from one of these API endpoints:
def paginate(url):
items = []
while url:
response = requests.get(url)
try:
url = response.links.get(""next"").get(""url"")
except AttributeError:
url = None
items.extend(response.json())
return items",736,
156,Special JSON arguments,"Every Datasette endpoint that can return JSON also accepts the following
query string arguments:
?_shape=SHAPE
The shape of the JSON to return, documented above.
?_nl=on
When used with ?_shape=array produces newline-delimited JSON objects.
?_json=COLUMN1&_json=COLUMN2
If any of your SQLite columns contain JSON values, you can use one or more
_json= parameters to request that those columns be returned as regular
JSON. Without this argument those columns will be returned as JSON objects
that have been double-encoded into a JSON string value.
Compare this query without the argument to this query using the argument
?_json_infinity=on
If your data contains infinity or -infinity values, Datasette will replace
them with None when returning them as JSON. If you pass _json_infinity=1
Datasette will instead return them as Infinity or -Infinity which is
invalid JSON but can be processed by some custom JSON parsers.
?_timelimit=MS
Sets a custom time limit for the query in ms. You can use this for optimistic
queries where you would like Datasette to give up if the query takes too
long, for example if you want to implement autocomplete search but only if
it can be executed in less than 10ms.
?_ttl=SECONDS
For how many seconds should this response be cached by HTTP proxies? Use
?_ttl=0 to disable HTTP caching entirely for this request.
?_trace=1
Turns on tracing for this page: SQL queries executed during the request will
be gathered and included in the response, either in a new ""_traces"" key
for JSON responses or at the bottom of the page if the response is in HTML.
The structure of the data returned here should be considered highly unstable
and very likely to change.
Only available if the trace_debug setting is enabled.",736,
157,Table arguments,The Datasette table view takes a number of special query string arguments.,736,
158,Column filter arguments,"You can filter the data returned by the table based on column values using a query string argument.
?column__exact=value or ?_column=value
Returns rows where the specified column exactly matches the value.
?column__not=value
Returns rows where the column does not match the value.
?column__contains=value
Rows where the string column contains the specified value ( column like ""%value%"" in SQL).
?column__notcontains=value
Rows where the string column does not contain the specified value ( column not like ""%value%"" in SQL).
?column__endswith=value
Rows where the string column ends with the specified value ( column like ""%value"" in SQL).
?column__startswith=value
Rows where the string column starts with the specified value ( column like ""value%"" in SQL).
?column__gt=value
Rows which are greater than the specified value.
?column__gte=value
Rows which are greater than or equal to the specified value.
?column__lt=value
Rows which are less than the specified value.
?column__lte=value
Rows which are less than or equal to the specified value.
?column__like=value
Match rows with a LIKE clause, case insensitive and with % as the wildcard character.
?column__notlike=value
Match rows that do not match the provided LIKE clause.
?column__glob=value
Similar to LIKE but uses Unix wildcard syntax and is case sensitive.
?column__in=value1,value2,value3
Rows where column matches any of the provided values.
You can use a comma separated string, or you can use a JSON array.
The JSON array option is useful if one of your matching values itself contains a comma:
?column__in=[""value"",""value,with,commas""]
?column__notin=value1,value2,value3
Rows where column does not match any of the provided values. The inverse of __in= . Also supports JSON arrays.
?column__arraycontains=value
Works against columns that contain JSON arrays - matches if any of the values in that array match the provided value.
This is only available if the json1 SQLite extension is enabled.
?column__arraynotcontains=value
Works against columns that contain JSON arrays - matches if none of the values in that array match the provided value.
This is only available if the json1 SQLite extension is enabled.
?column__date=value
Column is a datestamp occurring on the specified YYYY-MM-DD date, e.g. 2018-01-02 .
?column__isnull=1
Matches rows where the column is null.
?column__notnull=1
Matches rows where the column is not null.
?column__isblank=1
Matches rows where the column is blank, meaning null or the empty string.
?column__notblank=1
Matches rows where the column is not blank.",736,
159,Special table arguments,"?_col=COLUMN1&_col=COLUMN2
List specific columns to display. These will be shown along with any primary keys.
?_nocol=COLUMN1&_nocol=COLUMN2
List specific columns to hide - any column not listed will be displayed. Primary keys cannot be hidden.
?_labels=on/off
Expand foreign key references for every possible column. See below.
?_label=COLUMN1&_label=COLUMN2
Expand foreign key references for one or more specified columns.
?_size=1000 or ?_size=max
Sets a custom page size. This cannot exceed the max_returned_rows limit
passed to datasette serve . Use max to get max_returned_rows .
?_sort=COLUMN
Sorts the results by the specified column.
?_sort_desc=COLUMN
Sorts the results by the specified column in descending order.
?_search=keywords
For SQLite tables that have been configured for
full-text search executes a search
with the provided keywords.
?_search_COLUMN=keywords
Like _search= but allows you to specify the column to be searched, as
opposed to searching all columns that have been indexed by FTS.
?_searchmode=raw
With this option, queries passed to ?_search= or ?_search_COLUMN= will
not have special characters escaped. This means you can make use of the full
set of advanced SQLite FTS syntax ,
though this could potentially result in errors if the wrong syntax is used.
?_where=SQL-fragment
If the execute-sql permission is enabled, this parameter
can be used to pass one or more additional SQL fragments to be used in the
WHERE clause of the SQL used to query the table.
This is particularly useful if you are building a JavaScript application
that needs to do something creative but still wants the other conveniences
provided by the table view (such as faceting) and hence would like not to
have to construct a completely custom SQL query.
Some examples:
facetable?_where=_neighborhood like ""%c%""&_where=_city_id=3
facetable?_where=_city_id in (select id from facet_cities where name != ""Detroit"")
?_through={json}
This can be used to filter rows via a join against another table.
The JSON parameter must include three keys: table , column and value .
table must be a table that the current table is related to via a foreign key relationship.
column must be a column in that other table.
value is the value that you want to match against.
For example, to filter roadside_attractions to just show the attractions that have a characteristic of ""museum"", you would construct this JSON:
{
""table"": ""roadside_attraction_characteristics"",
""column"": ""characteristic_id"",
""value"": ""1""
}
As a URL, that looks like this:
?_through={%22table%22:%22roadside_attraction_characteristics%22,%22column%22:%22characteristic_id%22,%22value%22:%221%22}
Here's an example .
?_next=TOKEN
Pagination by continuation token - pass the token that was returned in the
""next"" property by the previous page.
?_facet=column
Facet by column. Can be applied multiple times, see Facets . Only works on the default JSON output, not on any of the custom shapes.
?_facet_size=100
Increase the number of facet results returned for each facet. Use ?_facet_size=max for the maximum available size, determined by max_returned_rows .
?_nofacet=1
Disable all facets and facet suggestions for this page, including any defined by Facets in metadata .
?_nosuggest=1
Disable facet suggestions for this page.
?_nocount=1
Disable the select count(*) query used on this page - a count of None will be returned instead.",736,
160,Expanding foreign key references,"Datasette can detect foreign key relationships and resolve those references into
labels. The HTML interface does this by default for every detected foreign key
column - you can turn that off using ?_labels=off .
You can request foreign keys be expanded in JSON using the _labels=on or
_label=COLUMN special query string parameters. Here's what an expanded row
looks like:
[
{
""rowid"": 1,
""TreeID"": 141565,
""qLegalStatus"": {
""value"": 1,
""label"": ""Permitted Site""
},
""qSpecies"": {
""value"": 1,
""label"": ""Myoporum laetum :: Myoporum""
},
""qAddress"": ""501X Baker St"",
""SiteOrder"": 1
}
]
The column in the foreign key table that is used for the label can be specified
in metadata.json - see Specifying the label column for a table .",736,
161,Discovering the JSON for a page,"Most of the HTML pages served by Datasette provide a mechanism for discovering their JSON equivalents using the HTML link mechanism.
You can find this near the top of the source code of those pages, looking like this:
The JSON URL is also made available in a Link HTTP header for the page:
Link: https://latest.datasette.io/fixtures/sortable.json; rel=""alternate""; type=""application/json+datasette""",736,
162,Enabling CORS,"If you start Datasette with the --cors option, each JSON endpoint will be
served with the following additional HTTP headers:
[[[cog
from datasette.utils import add_cors_headers
import textwrap
headers = {}
add_cors_headers(headers)
output = ""\n"".join(""{}: {}"".format(k, v) for k, v in headers.items())
cog.out(""\n::\n\n"")
cog.out(textwrap.indent(output, ' '))
cog.out(""\n\n"")
]]]
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Authorization, Content-Type
Access-Control-Expose-Headers: Link
Access-Control-Allow-Methods: GET, POST, HEAD, OPTIONS
Access-Control-Max-Age: 3600
[[[end]]]
This allows JavaScript running on any domain to make cross-origin
requests to interact with the Datasette API.
If you start Datasette without the --cors option only JavaScript running on
the same domain as Datasette will be able to access the API.
Here's how to serve data.db with CORS enabled:
datasette data.db --cors",736,
163,The JSON write API,"Datasette provides a write API for JSON data. This is a POST-only API that requires an authenticated API token, see API Tokens . The token will need to have the specified Permissions .",736,
164,Inserting rows,"This requires the insert-row permission.
A single row can be inserted using the ""row"" key:
POST //
/-/insert
Content-Type: application/json
Authorization: Bearer dstok_
{
""row"": {
""column1"": ""value1"",
""column2"": ""value2""
}
}
If successful, this will return a 201 status code and the newly inserted row, for example:
{
""rows"": [
{
""id"": 1,
""column1"": ""value1"",
""column2"": ""value2""
}
]
}
To insert multiple rows at a time, use the same API method but send a list of dictionaries as the ""rows"" key:
POST //
/-/insert
Content-Type: application/json
Authorization: Bearer dstok_
{
""rows"": [
{
""column1"": ""value1"",
""column2"": ""value2""
},
{
""column1"": ""value3"",
""column2"": ""value4""
}
]
}
If successful, this will return a 201 status code and a {""ok"": true} response body.
The maximum number rows that can be submitted at once defaults to 100, but this can be changed using the max_insert_rows setting.
To return the newly inserted rows, add the ""return"": true key to the request body:
{
""rows"": [
{
""column1"": ""value1"",
""column2"": ""value2""
},
{
""column1"": ""value3"",
""column2"": ""value4""
}
],
""return"": true
}
This will return the same ""rows"" key as the single row example above. There is a small performance penalty for using this option.
If any of your rows have a primary key that is already in use, you will get an error and none of the rows will be inserted:
{
""ok"": false,
""errors"": [
""UNIQUE constraint failed: new_table.id""
]
}
Pass ""ignore"": true to ignore these errors and insert the other rows:
{
""rows"": [
{
""id"": 1,
""column1"": ""value1"",
""column2"": ""value2""
},
{
""id"": 2,
""column1"": ""value3"",
""column2"": ""value4""
}
],
""ignore"": true
}
Or you can pass ""replace"": true to replace any rows with conflicting primary keys with the new values. This requires the update-row permission.
Pass ""alter: true to automatically add any missing columns to the table. This requires the alter-table permission.",736,
165,Upserting rows,"An upsert is an insert or update operation. If a row with a matching primary key already exists it will be updated - otherwise a new row will be inserted.
The upsert API is mostly the same shape as the insert API . It requires both the insert-row and update-row permissions.
POST //
/-/upsert
Content-Type: application/json
Authorization: Bearer dstok_
{
""rows"": [
{
""id"": 1,
""title"": ""Updated title for 1"",
""description"": ""Updated description for 1""
},
{
""id"": 2,
""description"": ""Updated description for 2"",
},
{
""id"": 3,
""title"": ""Item 3"",
""description"": ""Description for 3""
}
]
}
Imagine a table with a primary key of id and which already has rows with id values of 1 and 2 .
The above example will:
Update the row with id of 1 to set both title and description to the new values
Update the row with id of 2 to set title to the new value - description will be left unchanged
Insert a new row with id of 3 and both title and description set to the new values
Similar to /-/insert , a row key with an object can be used instead of a rows array to upsert a single row.
If successful, this will return a 200 status code and a {""ok"": true} response body.
Add ""return"": true to the request body to return full copies of the affected rows after they have been inserted or updated:
{
""rows"": [
{
""id"": 1,
""title"": ""Updated title for 1"",
""description"": ""Updated description for 1""
},
{
""id"": 2,
""description"": ""Updated description for 2"",
},
{
""id"": 3,
""title"": ""Item 3"",
""description"": ""Description for 3""
}
],
""return"": true
}
This will return the following:
{
""ok"": true,
""rows"": [
{
""id"": 1,
""title"": ""Updated title for 1"",
""description"": ""Updated description for 1""
},
{
""id"": 2,
""title"": ""Item 2"",
""description"": ""Updated description for 2""
},
{
""id"": 3,
""title"": ""Item 3"",
""description"": ""Description for 3""
}
]
}
When using upsert you must provide the primary key column (or columns if the table has a compound primary key) for every row, or you will get a 400 error:
{
""ok"": false,
""errors"": [
""Row 0 is missing primary key column(s): \""id\""""
]
}
If your table does not have an explicit primary key you should pass the SQLite rowid key instead.
Pass ""alter: true to automatically add any missing columns to the table. This requires the alter-table permission.",736,
166,Updating a row,"To update a row, make a POST to //
//-/update . This requires the update-row permission.
POST //
//-/update
Content-Type: application/json
Authorization: Bearer dstok_
{
""update"": {
""text_column"": ""New text string"",
""integer_column"": 3,
""float_column"": 3.14
}
}
here is the tilde-encoded primary key value of the row to update - or a comma-separated list of primary key values if the table has a composite primary key.
You only need to pass the columns you want to update. Any other columns will be left unchanged.
If successful, this will return a 200 status code and a {""ok"": true} response body.
Add ""return"": true to the request body to return the updated row:
{
""update"": {
""title"": ""New title""
},
""return"": true
}
The returned JSON will look like this:
{
""ok"": true,
""row"": {
""id"": 1,
""title"": ""New title"",
""other_column"": ""Will be present here too""
}
}
Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.
Pass ""alter: true to automatically add any missing columns to the table. This requires the alter-table permission.",736,
167,Deleting a row,"To delete a row, make a POST to //
//-/delete . This requires the delete-row permission.
POST //
//-/delete
Content-Type: application/json
Authorization: Bearer dstok_ here is the tilde-encoded primary key value of the row to delete - or a comma-separated list of primary key values if the table has a composite primary key.
If successful, this will return a 200 status code and a {""ok"": true} response body.
Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.",736,
168,Creating a table,"To create a table, make a POST to //-/create . This requires the create-table permission.
POST //-/create
Content-Type: application/json
Authorization: Bearer dstok_
{
""table"": ""name_of_new_table"",
""columns"": [
{
""name"": ""id"",
""type"": ""integer""
},
{
""name"": ""title"",
""type"": ""text""
}
],
""pk"": ""id""
}
The JSON here describes the table that will be created:
table is the name of the table to create. This field is required.
columns is a list of columns to create. Each column is a dictionary with name and type keys.
name is the name of the column. This is required.
type is the type of the column. This is optional - if not provided, text will be assumed. The valid types are text , integer , float and blob .
pk is the primary key for the table. This is optional - if not provided, Datasette will create a SQLite table with a hidden rowid column.
If the primary key is an integer column, it will be configured to automatically increment for each new record.
If you set this to id without including an id column in the list of columns , Datasette will create an auto-incrementing integer ID column for you.
pks can be used instead of pk to create a compound primary key. It should be a JSON list of column names to use in that primary key.
ignore can be set to true to ignore existing rows by primary key if the table already exists.
replace can be set to true to replace existing rows by primary key if the table already exists. This requires the update-row permission.
alter can be set to true if you want to automatically add any missing columns to the table. This requires the alter-table permission.
If the table is successfully created this will return a 201 status code and the following response:
{
""ok"": true,
""database"": ""data"",
""table"": ""name_of_new_table"",
""table_url"": ""http://127.0.0.1:8001/data/name_of_new_table"",
""table_api_url"": ""http://127.0.0.1:8001/data/name_of_new_table.json"",
""schema"": ""CREATE TABLE [name_of_new_table] (\n [id] INTEGER PRIMARY KEY,\n [title] TEXT\n)""
}",736,
169,Creating a table from example data,"Instead of specifying columns directly you can instead pass a single example row or a list of rows .
Datasette will create a table with a schema that matches those rows and insert them for you:
POST //-/create
Content-Type: application/json
Authorization: Bearer dstok_
{
""table"": ""creatures"",
""rows"": [
{
""id"": 1,
""name"": ""Tarantula""
},
{
""id"": 2,
""name"": ""Kākāpō""
}
],
""pk"": ""id""
}
Doing this requires both the create-table and insert-row permissions.
The 201 response here will be similar to the columns form, but will also include the number of rows that were inserted as row_count :
{
""ok"": true,
""database"": ""data"",
""table"": ""creatures"",
""table_url"": ""http://127.0.0.1:8001/data/creatures"",
""table_api_url"": ""http://127.0.0.1:8001/data/creatures.json"",
""schema"": ""CREATE TABLE [creatures] (\n [id] INTEGER PRIMARY KEY,\n [name] TEXT\n)"",
""row_count"": 2
}
You can call the create endpoint multiple times for the same table provided you are specifying the table using the rows or row option. New rows will be inserted into the table each time. This means you can use this API if you are unsure if the relevant table has been created yet.
If you pass a row to the create endpoint with a primary key that already exists you will get an error that looks like this:
{
""ok"": false,
""errors"": [
""UNIQUE constraint failed: creatures.id""
]
}
You can avoid this error by passing the same ""ignore"": true or ""replace"": true options to the create endpoint as you can to the insert endpoint .
To use the ""replace"": true option you will also need the update-row permission.
Pass ""alter"": true to automatically add any missing columns to the existing table that are present in the rows you are submitting. This requires the alter-table permission.",736,
170,Dropping tables,"To drop a table, make a POST to //
/-/drop . This requires the drop-table permission.
POST //
/-/drop
Content-Type: application/json
Authorization: Bearer dstok_
Without a POST body this will return a status 200 with a note about how many rows will be deleted:
{
""ok"": true,
""database"": """",
""table"": ""
"",
""row_count"": 5,
""message"": ""Pass \""confirm\"": true to confirm""
}
If you pass the following POST body:
{
""confirm"": true
}
Then the table will be dropped and a status 200 response of {""ok"": true} will be returned.
Any errors will return {""errors"": [""... descriptive message ...""], ""ok"": false} , and a 400 status code for a bad input or a 403 status code for an authentication or permission error.",736,
171,Deploying Datasette,"The quickest way to deploy a Datasette instance on the internet is to use the datasette publish command, described in Publishing data . This can be used to quickly deploy Datasette to a number of hosting providers including Heroku, Google Cloud Run and Vercel.
You can deploy Datasette to other hosting providers using the instructions on this page.",736,
172,Deployment fundamentals,"Datasette can be deployed as a single datasette process that listens on a port. Datasette is not designed to be run as root, so that process should listen on a higher port such as port 8000.
If you want to serve Datasette on port 80 (the HTTP default port) or port 443 (for HTTPS) you should run it behind a proxy server, such as nginx, Apache or HAProxy. The proxy server can listen on port 80/443 and forward traffic on to Datasette.",736,
173,Running Datasette using systemd,"You can run Datasette on Ubuntu or Debian systems using systemd .
First, ensure you have Python 3 and pip installed. On Ubuntu you can use sudo apt-get install python3 python3-pip .
You can install Datasette into a virtual environment, or you can install it system-wide. To install system-wide, use sudo pip3 install datasette .
Now create a folder for your Datasette databases, for example using mkdir /home/ubuntu/datasette-root .
You can copy a test database into that folder like so:
cd /home/ubuntu/datasette-root
curl -O https://latest.datasette.io/fixtures.db
Create a file at /etc/systemd/system/datasette.service with the following contents:
[Unit]
Description=Datasette
After=network.target
[Service]
Type=simple
User=ubuntu
Environment=DATASETTE_SECRET=
WorkingDirectory=/home/ubuntu/datasette-root
ExecStart=datasette serve . -h 127.0.0.1 -p 8000
Restart=on-failure
[Install]
WantedBy=multi-user.target
Add a random value for the DATASETTE_SECRET - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so:
python3 -c 'import secrets; print(secrets.token_hex(32))'
This configuration will run Datasette against all database files contained in the /home/ubuntu/datasette-root directory. If that directory contains a metadata.yml (or .json ) file or a templates/ or plugins/ sub-directory those will automatically be loaded by Datasette - see Configuration directory mode for details.
You can start the Datasette process running using the following:
sudo systemctl daemon-reload
sudo systemctl start datasette.service
You will need to restart the Datasette service after making changes to its metadata.json configuration or adding a new database file to that directory. You can do that using:
sudo systemctl restart datasette.service
Once the service has started you can confirm that Datasette is running on port 8000 like so:
curl 127.0.0.1:8000/-/versions.json
# Should output JSON showing the installed version
Datasette will not be accessible from outside the server because it is listening on 127.0.0.1 . You can expose it by instead listening on 0.0.0.0 , but a better way is to set up a proxy such as nginx - see Running Datasette behind a proxy .",736,
174,Running Datasette using OpenRC,"OpenRC is the service manager on non-systemd Linux distributions like Alpine Linux and Gentoo .
Create an init script at /etc/init.d/datasette with the following contents:
#!/sbin/openrc-run
name=""datasette""
command=""datasette""
command_args=""serve -h 0.0.0.0 /path/to/db.db""
command_background=true
pidfile=""/run/${RC_SVCNAME}.pid""
You then need to configure the service to run at boot and start it:
rc-update add datasette
rc-service datasette start",736,
175,Deploying using buildpacks,"Some hosting providers such as Heroku , DigitalOcean App Platform and Scalingo support the Buildpacks standard for deploying Python web applications.
Deploying Datasette on these platforms requires two files: requirements.txt and Procfile .
The requirements.txt file lets the platform know which Python packages should be installed. It should contain datasette at a minimum, but can also list any Datasette plugins you wish to install - for example:
datasette
datasette-vega
The Procfile lets the hosting platform know how to run the command that serves web traffic. It should look like this:
web: datasette . -h 0.0.0.0 -p $PORT --cors
The $PORT environment variable is provided by the hosting platform. --cors enables CORS requests from JavaScript running on other websites to your domain - omit this if you don't want to allow CORS. You can add additional Datasette Settings options here too.
These two files should be enough to deploy Datasette on any host that supports buildpacks. Datasette will serve any SQLite files that are included in the root directory of the application.
If you want to build SQLite files or download them as part of the deployment process you can do so using a bin/post_compile file. For example, the following bin/post_compile will download an example database that will then be served by Datasette:
wget https://fivethirtyeight.datasettes.com/fivethirtyeight.db
simonw/buildpack-datasette-demo is an example GitHub repository showing a Datasette configuration that can be deployed to a buildpack-supporting host.",736,
176,Running Datasette behind a proxy,"You may wish to run Datasette behind an Apache or nginx proxy, using a path within your existing site.
You can use the base_url configuration setting to tell Datasette to serve traffic with a specific URL prefix. For example, you could run Datasette like this:
datasette my-database.db --setting base_url /my-datasette/ -p 8009
This will run Datasette with the following URLs:
http://127.0.0.1:8009/my-datasette/ - the Datasette homepage
http://127.0.0.1:8009/my-datasette/my-database - the page for the my-database.db database
http://127.0.0.1:8009/my-datasette/my-database/some_table - the page for the some_table table
You can now set your nginx or Apache server to proxy the /my-datasette/ path to this Datasette instance.",736,
177,Nginx proxy configuration,"Here is an example of an nginx configuration file that will proxy traffic to Datasette:
daemon off;
events {
worker_connections 1024;
}
http {
server {
listen 80;
location /my-datasette {
proxy_pass http://127.0.0.1:8009/my-datasette;
proxy_set_header Host $host;
}
}
}
You can also use the --uds option to Datasette to listen on a Unix domain socket instead of a port, configuring the nginx upstream proxy like this:
daemon off;
events {
worker_connections 1024;
}
http {
server {
listen 80;
location /my-datasette {
proxy_pass http://datasette/my-datasette;
proxy_set_header Host $host;
}
}
upstream datasette {
server unix:/tmp/datasette.sock;
}
}
Then run Datasette with datasette --uds /tmp/datasette.sock path/to/database.db --setting base_url /my-datasette/ .",736,
178,Apache proxy configuration,"For Apache , you can use the ProxyPass directive. First make sure the following lines are uncommented:
LoadModule proxy_module lib/httpd/modules/mod_proxy.so
LoadModule proxy_http_module lib/httpd/modules/mod_proxy_http.so
Then add these directives to proxy traffic:
ProxyPass /my-datasette/ http://127.0.0.1:8009/my-datasette/
ProxyPreserveHost On
A live demo of Datasette running behind Apache using this proxy setup can be seen at datasette-apache-proxy-demo.datasette.io/prefix/ . The code for that demo can be found in the demos/apache-proxy directory.
Using --uds you can use Unix domain sockets similar to the nginx example:
ProxyPass /my-datasette/ unix:/tmp/datasette.sock|http://localhost/my-datasette/
The ProxyPreserveHost On directive ensures that the original Host: header from the incoming request is passed through to Datasette. Datasette needs this to correctly assemble links to other pages using the .absolute_url(request, path) method.",736,
179,Custom pages and templates,Datasette provides a number of ways of customizing the way data is displayed.,736,
180,CSS classes on the ,"Every default template includes CSS classes in the body designed to support
custom styling.
The index template (the top level page at / ) gets this:
The database template ( /dbname ) gets this:
The custom SQL template ( /dbname?sql=... ) gets this:
A canned query template ( /dbname/queryname ) gets this:
The table template ( /dbname/tablename ) gets:
The row template ( /dbname/tablename/rowid ) gets:
The db-x and table-x classes use the database or table names themselves if
they are valid CSS identifiers. If they aren't, we strip any invalid
characters out and append a 6 character md5 digest of the original name, in
order to ensure that multiple tables which resolve to the same stripped
character version still have different CSS classes.
Some examples:
""simple"" => ""simple""
""MixedCase"" => ""MixedCase""
""-no-leading-hyphens"" => ""no-leading-hyphens-65bea6""
""_no-leading-underscores"" => ""no-leading-underscores-b921bc""
""no spaces"" => ""no-spaces-7088d7""
""-"" => ""336d5e""
""no $ characters"" => ""no--characters-59e024""
and
elements also get custom CSS classes reflecting the
database column they are representing, for example:
",736,
181,Serving static files,"Datasette can serve static files for you, using the --static option.
Consider the following directory structure:
metadata.json
static-files/styles.css
static-files/app.js
You can start Datasette using --static assets:static-files/ to serve those
files from the /assets/ mount point:
datasette --config datasette.yaml --static assets:static-files/ --memory
The following URLs will now serve the content from those CSS and JS files:
http://localhost:8001/assets/styles.css
http://localhost:8001/assets/app.js
You can reference those files from datasette.yaml like this, see custom CSS and JavaScript for more details:
[[[cog
from metadata_doc import config_example
config_example(cog, """"""
extra_css_urls:
- /assets/styles.css
extra_js_urls:
- /assets/app.js
"""""")
]]]
[[[end]]]",736,
182,Publishing static assets,"The datasette publish command can be used to publish your static assets,
using the same syntax as above:
datasette publish cloudrun mydb.db --static assets:static-files/
This will upload the contents of the static-files/ directory as part of the
deployment, and configure Datasette to correctly serve the assets from /assets/ .",736,
183,Custom templates,"By default, Datasette uses default templates that ship with the package.
You can over-ride these templates by specifying a custom --template-dir like
this:
datasette mydb.db --template-dir=mytemplates/
Datasette will now first look for templates in that directory, and fall back on
the defaults if no matches are found.
It is also possible to over-ride templates on a per-database, per-row or per-
table basis.
The lookup rules Datasette uses are as follows:
Index page (/):
index.html
Database page (/mydatabase):
database-mydatabase.html
database.html
Custom query page (/mydatabase?sql=...):
query-mydatabase.html
query.html
Canned query page (/mydatabase/canned-query):
query-mydatabase-canned-query.html
query-mydatabase.html
query.html
Table page (/mydatabase/mytable):
table-mydatabase-mytable.html
table.html
Row page (/mydatabase/mytable/id):
row-mydatabase-mytable.html
row.html
Table of rows and columns include on table page:
_table-table-mydatabase-mytable.html
_table-mydatabase-mytable.html
_table.html
Table of rows and columns include on row page:
_table-row-mydatabase-mytable.html
_table-mydatabase-mytable.html
_table.html
If a table name has spaces or other unexpected characters in it, the template
filename will follow the same rules as our custom CSS classes - for
example, a table called ""Food Trucks"" will attempt to load the following
templates:
table-mydatabase-Food-Trucks-399138.html
table.html
You can find out which templates were considered for a specific page by viewing
source on that page and looking for an HTML comment at the bottom. The comment
will look something like this:
This example is from the canned query page for a query called ""tz"" in the
database called ""mydb"". The asterisk shows which template was selected - so in
this case, Datasette found a template file called query-mydb-tz.html and
used that - but if that template had not been found, it would have tried for
query-mydb.html or the default query.html .
It is possible to extend the default templates using Jinja template
inheritance. If you want to customize EVERY row template with some additional
content you can do so by creating a row.html template like this:
{% extends ""default:row.html"" %}
{% block content %}
EXTRA HTML AT THE TOP OF THE CONTENT BLOCK
This line renders the original block:
{{ super() }}
{% endblock %}
Note the default:row.html template name, which ensures Jinja will inherit
from the default template.
The _table.html template is included by both the row and the table pages,
and a list of rows. The default _table.html template renders them as an
HTML template and can be seen here .
You can provide a custom template that applies to all of your databases and
tables, or you can provide custom templates for specific tables using the
template naming scheme described above.
If you want to present your data in a format other than an HTML table, you
can do so by looping through display_rows in your own _table.html
template. You can use {{ row[""column_name""] }} to output the raw value
of a specific column.
If you want to output the rendered HTML version of a column, including any
links to foreign keys, you can use {{ row.display(""column_name"") }} .
Here is an example of a custom _table.html template:
{% for row in display_rows %}
{{ row[""title""] }}
{{ row[""description""] }}
Category: {{ row.display(""category_id"") }}
{% endfor %}",736,
184,Custom pages,"You can add templated pages to your Datasette instance by creating HTML files in a pages directory within your templates directory.
For example, to add a custom page that is served at http://localhost/about you would create a file in templates/pages/about.html , then start Datasette like this:
datasette mydb.db --template-dir=templates/
You can nest directories within pages to create a nested structure. To create a http://localhost:8001/about/map page you would create templates/pages/about/map.html .",736,
185,Path parameters for pages,"You can define custom pages that match multiple paths by creating files with {variable} definitions in their filenames.
For example, to capture any request to a URL matching /about/* , you would create a template in the following location:
templates/pages/about/{slug}.html
A hit to /about/news would render that template and pass in a variable called slug with a value of ""news"" .
If you use this mechanism don't forget to return a 404 if the referenced content could not be found. You can do this using {{ raise_404() }} described below.
Templates defined using custom page routes work particularly well with the sql() template function from datasette-template-sql or the graphql() template function from datasette-graphql .",736,
186,Custom headers and status codes,"Custom pages default to being served with a content-type of text/html; charset=utf-8 and a 200 status code. You can change these by calling a custom function from within your template.
For example, to serve a custom page with a 418 I'm a teapot HTTP status code, create a file in pages/teapot.html containing the following:
{{ custom_status(418) }}
Teapot
I'm a teapot
To serve a custom HTTP header, add a custom_header(name, value) function call. For example:
{{ custom_status(418) }}
{{ custom_header(""x-teapot"", ""I am"") }}
Teapot
I'm a teapot
You can verify this is working using curl like this:
curl -I 'http://127.0.0.1:8001/teapot'
HTTP/1.1 418
date: Sun, 26 Apr 2020 18:38:30 GMT
server: uvicorn
x-teapot: I am
content-type: text/html; charset=utf-8",736,
187,Returning 404s,"To indicate that content could not be found and display the default 404 page you can use the raise_404(message) function:
{% if not rows %}
{{ raise_404(""Content not found"") }}
{% endif %}
If you call raise_404() the other content in your template will be ignored.",736,
188,Custom redirects,"You can use the custom_redirect(location) function to redirect users to another page, for example in a file called pages/datasette.html :
{{ custom_redirect(""https://github.com/simonw/datasette"") }}
Now requests to http://localhost:8001/datasette will result in a redirect.
These redirects are served with a 302 Found status code by default. You can send a 301 Moved Permanently code by passing 301 as the second argument to the function:
{{ custom_redirect(""https://github.com/simonw/datasette"", 301) }}",736,
189,Custom error pages,"Datasette returns an error page if an unexpected error occurs, access is forbidden or content cannot be found.
You can customize the response returned for these errors by providing a custom error page template.
Content not found errors use a 404.html template. Access denied errors use 403.html . Invalid input errors use 400.html . Unexpected errors of other kinds use 500.html .
If a template for the specific error code is not found a template called error.html will be used instead. If you do not provide that template Datasette's default error.html template will be used.
The error template will be passed the following context:
status - integer
The integer HTTP status code, e.g. 404, 500, 403, 400.
error - string
Details of the specific error, usually a full sentence.
title - string or None
A title for the page representing the class of error. This is often None for errors that do not provide a title separate from their error message.",736,
190,SpatiaLite,"The SpatiaLite module for SQLite adds features for handling geographic and spatial data. For an example of what you can do with it, see the tutorial Building a location to time zone API with SpatiaLite .
To use it with Datasette, you need to install the mod_spatialite dynamic library. This can then be loaded into Datasette using the --load-extension command-line option.
Datasette can look for SpatiaLite in common installation locations if you run it like this:
datasette --load-extension=spatialite --setting default_allow_sql off
If SpatiaLite is in another location, use the full path to the extension instead:
datasette --setting default_allow_sql off \
--load-extension=/usr/local/lib/mod_spatialite.dylib",736,
191,Warning,"The SpatiaLite extension adds a large number of additional SQL functions , some of which are not be safe for untrusted users to execute: they may cause the Datasette server to crash.
You should not expose a SpatiaLite-enabled Datasette instance to the public internet without taking extra measures to secure it against potentially harmful SQL queries.
The following steps are recommended:
Disable arbitrary SQL queries by untrusted users. See Controlling the ability to execute arbitrary SQL for ways to do this. The easiest is to start Datasette with the datasette --setting default_allow_sql off option.
Define Canned queries with the SQL queries that use SpatiaLite functions that you want people to be able to execute.
The Datasette SpatiaLite tutorial includes detailed instructions for running SpatiaLite safely using these techniques",736,
192,Installation,,736,
193,Installing SpatiaLite on OS X,"The easiest way to install SpatiaLite on OS X is to use Homebrew .
brew update
brew install spatialite-tools
This will install the spatialite command-line tool and the mod_spatialite dynamic library.
You can now run Datasette like so:
datasette --load-extension=spatialite",736,
194,Installing SpatiaLite on Linux,"SpatiaLite is packaged for most Linux distributions.
apt install spatialite-bin libsqlite3-mod-spatialite
Depending on your distribution, you should be able to run Datasette something like this:
datasette --load-extension=/usr/lib/x86_64-linux-gnu/mod_spatialite.so
If you are unsure of the location of the module, try running locate mod_spatialite and see what comes back.",736,
195,Spatial indexing latitude/longitude columns,"Here's a recipe for taking a table with existing latitude and longitude columns, adding a SpatiaLite POINT geometry column to that table, populating the new column and then populating a spatial index:
import sqlite3
conn = sqlite3.connect(""museums.db"")
# Lead the spatialite extension:
conn.enable_load_extension(True)
conn.load_extension(""/usr/local/lib/mod_spatialite.dylib"")
# Initialize spatial metadata for this database:
conn.execute(""select InitSpatialMetadata(1)"")
# Add a geometry column called point_geom to our museums table:
conn.execute(
""SELECT AddGeometryColumn('museums', 'point_geom', 4326, 'POINT', 2);""
)
# Now update that geometry column with the lat/lon points
conn.execute(
""""""
UPDATE museums SET
point_geom = GeomFromText('POINT('||""longitude""||' '||""latitude""||')',4326);
""""""
)
# Now add a spatial index to that column
conn.execute(
'select CreateSpatialIndex(""museums"", ""point_geom"");'
)
# If you don't commit your changes will not be persisted:
conn.commit()
conn.close()",736,
196,Making use of a spatial index,"SpatiaLite spatial indexes are R*Trees. They allow you to run efficient bounding box queries using a sub-select, with a similar pattern to that used for Searches using custom SQL .
In the above example, the resulting index will be called idx_museums_point_geom . This takes the form of a SQLite virtual table. You can inspect its contents using the following query:
select * from idx_museums_point_geom limit 10;
Here's a live example: timezones-api.datasette.io/timezones/idx_timezones_Geometry
pkid
xmin
xmax
ymin
ymax
1
-8.601725578308105
-2.4930307865142822
4.162120819091797
10.74019718170166
2
-3.2607860565185547
1.27329421043396
4.539252281188965
11.174856185913086
3
32.997581481933594
47.98238754272461
3.3974475860595703
14.894054412841797
4
-8.66890811920166
11.997337341308594
18.9681453704834
37.296207427978516
5
36.43336486816406
43.300174713134766
12.354820251464844
18.070993423461914
You can now construct efficient bounding box queries that will make use of the index like this:
select * from museums where museums.rowid in (
SELECT pkid FROM idx_museums_point_geom
-- left-hand-edge of point > left-hand-edge of bbox (minx)
where xmin > :bbox_minx
-- right-hand-edge of point < right-hand-edge of bbox (maxx)
and xmax < :bbox_maxx
-- bottom-edge of point > bottom-edge of bbox (miny)
and ymin > :bbox_miny
-- top-edge of point < top-edge of bbox (maxy)
and ymax < :bbox_maxy
);
Spatial indexes can be created against polygon columns as well as point columns, in which case they will represent the minimum bounding rectangle of that polygon. This is useful for accelerating within queries, as seen in the Timezones API example.",736,
197,Importing shapefiles into SpatiaLite,"The shapefile format is a common format for distributing geospatial data. You can use the spatialite command-line tool to create a new database table from a shapefile.
Try it now with the North America shapefile available from the University of North Carolina Global River Database project. Download the file and unzip it (this will create files called narivs.dbf , narivs.prj , narivs.shp and narivs.shx in the current directory), then run the following:
spatialite rivers-database.db
SpatiaLite version ..: 4.3.0a Supported Extensions:
...
spatialite> .loadshp narivs rivers CP1252 23032
========
Loading shapefile at 'narivs' into SQLite table 'rivers'
...
Inserted 467973 rows into 'rivers' from SHAPEFILE
This will load the data from the narivs shapefile into a new database table called rivers .
Exit out of spatialite (using Ctrl+D ) and run Datasette against your new database like this:
datasette rivers-database.db \
--load-extension=/usr/local/lib/mod_spatialite.dylib
If you browse to http://localhost:8001/rivers-database/rivers you will see the new table... but the Geometry column will contain unreadable binary data (SpatiaLite uses a custom format based on WKB ).
The easiest way to turn this into semi-readable data is to use the SpatiaLite AsGeoJSON function. Try the following using the SQL query interface at http://localhost:8001/rivers-database :
select *, AsGeoJSON(Geometry) from rivers limit 10;
This will give you back an additional column of GeoJSON. You can copy and paste GeoJSON from this column into the debugging tool at geojson.io to visualize it on a map.
To see a more interesting example, try ordering the records with the longest geometry first. Since there are 467,000 rows in the table you will first need to increase the SQL time limit imposed by Datasette:
datasette rivers-database.db \
--load-extension=/usr/local/lib/mod_spatialite.dylib \
--setting sql_time_limit_ms 10000
Now try the following query:
select *, AsGeoJSON(Geometry) from rivers
order by length(Geometry) desc limit 10;",736,
198,Importing GeoJSON polygons using Shapely,"Another common form of polygon data is the GeoJSON format. This can be imported into SpatiaLite directly, or by using the Shapely Python library.
Who's On First is an excellent source of openly licensed GeoJSON polygons. Let's import the geographical polygon for Wales. First, we can use the Who's On First Spelunker tool to find the record for Wales:
spelunker.whosonfirst.org/id/404227475
That page includes a link to the GeoJSON record, which can be accessed here:
data.whosonfirst.org/404/227/475/404227475.geojson
Here's Python code to create a SQLite database, enable SpatiaLite, create a places table and then add a record for Wales:
import sqlite3
conn = sqlite3.connect(""places.db"")
# Enable SpatialLite extension
conn.enable_load_extension(True)
conn.load_extension(""/usr/local/lib/mod_spatialite.dylib"")
# Create the masic countries table
conn.execute(""select InitSpatialMetadata(1)"")
conn.execute(
""create table places (id integer primary key, name text);""
)
# Add a MULTIPOLYGON Geometry column
conn.execute(
""SELECT AddGeometryColumn('places', 'geom', 4326, 'MULTIPOLYGON', 2);""
)
# Add a spatial index against the new column
conn.execute(""SELECT CreateSpatialIndex('places', 'geom');"")
# Now populate the table
from shapely.geometry.multipolygon import MultiPolygon
from shapely.geometry import shape
import requests
geojson = requests.get(
""https://data.whosonfirst.org/404/227/475/404227475.geojson""
).json()
# Convert to ""Well Known Text"" format
wkt = shape(geojson[""geometry""]).wkt
# Insert and commit the record
conn.execute(
""INSERT INTO places (id, name, geom) VALUES(null, ?, GeomFromText(?, 4326))"",
(""Wales"", wkt),
)
conn.commit()",736,
199,Querying polygons using within(),"The within() SQL function can be used to check if a point is within a geometry:
select
name
from
places
where
within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom);
The GeomFromText() function takes a string of well-known text. Note that the order used here is longitude then latitude .
To run that same within() query in a way that benefits from the spatial index, use the following:
select
name
from
places
where
within(GeomFromText('POINT(-3.1724366 51.4704448)'), places.geom)
and rowid in (
SELECT pkid FROM idx_places_geom
where xmin < -3.1724366
and xmax > -3.1724366
and ymin < 51.4704448
and ymax > 51.4704448
);",736,
200,Metadata,"Data loves metadata. Any time you run Datasette you can optionally include a
YAML or JSON file with metadata about your databases and tables. Datasette will then
display that information in the web UI.
Run Datasette like this:
datasette database1.db database2.db --metadata metadata.yaml
Your metadata.yaml file can look something like this:
[[[cog
from metadata_doc import metadata_example
metadata_example(cog, {
""title"": ""Custom title for your index page"",
""description"": ""Some description text can go here"",
""license"": ""ODbL"",
""license_url"": ""https://opendatacommons.org/licenses/odbl/"",
""source"": ""Original Data Source"",
""source_url"": ""http://example.com/""
})
]]]
[[[end]]]
Choosing YAML over JSON adds support for multi-line strings and comments.
The above metadata will be displayed on the index page of your Datasette-powered
site. The source and license information will also be included in the footer of
every page served by Datasette.
Any special HTML characters in description will be escaped. If you want to
include HTML in your description, you can use a description_html property
instead.",736,
201,Per-database and per-table metadata,"Metadata at the top level of the file will be shown on the index page and in the
footer on every page of the site. The license and source is expected to apply to
all of your data.
You can also provide metadata at the per-database or per-table level, like this:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""source"": ""Alternative source"",
""source_url"": ""http://example.com/"",
""tables"": {
""example_table"": {
""description_html"": ""Custom table description"",
""license"": ""CC BY 3.0 US"",
""license_url"": ""https://creativecommons.org/licenses/by/3.0/us/""
}
}
}
}
})
]]]
[[[end]]]
Each of the top-level metadata fields can be used at the database and table level.",736,
202,"Source, license and about","The three visible metadata fields you can apply to everything, specific databases or specific tables are source, license and about. All three are optional.
source and source_url should be used to indicate where the underlying data came from.
license and license_url should be used to indicate the license under which the data can be used.
about and about_url can be used to link to further information about the project - an accompanying blog entry for example.
For each of these you can provide just the *_url field and Datasette will treat that as the default link label text and display the URL directly on the page.",736,
203,Column descriptions,"You can include descriptions for your columns by adding a ""columns"": {""name-of-column"": ""description-of-column""} block to your table metadata:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""columns"": {
""column1"": ""Description of column 1"",
""column2"": ""Description of column 2""
}
}
}
}
}
})
]]]
[[[end]]]
These will be displayed at the top of the table page, and will also show in the cog menu for each column.
You can see an example of how these look at latest.datasette.io/fixtures/roadside_attractions .",736,
204,Specifying units for a column,"Datasette supports attaching units to a column, which will be used when displaying
values from that column. SI prefixes will be used where appropriate.
Column units are configured in the metadata like so:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""units"": {
""column1"": ""metres"",
""column2"": ""Hz""
}
}
}
}
}
})
]]]
[[[end]]]
Units are interpreted using Pint , and you can see the full list of available units in
Pint's unit registry . You can also add custom units to the metadata, which will be
registered with Pint:
[[[cog
metadata_example(cog, {
""custom_units"": [
""decibel = [] = dB""
]
})
]]]
[[[end]]]",736,
205,Setting a default sort order,"By default Datasette tables are sorted by primary key. You can over-ride this default for a specific table using the ""sort"" or ""sort_desc"" metadata properties:
[[[cog
metadata_example(cog, {
""databases"": {
""mydatabase"": {
""tables"": {
""example_table"": {
""sort"": ""created""
}
}
}
}
})
]]]
[[[end]]]
Or use ""sort_desc"" to sort in descending order:
[[[cog
metadata_example(cog, {
""databases"": {
""mydatabase"": {
""tables"": {
""example_table"": {
""sort_desc"": ""created""
}
}
}
}
})
]]]
[[[end]]]",736,
206,Setting a custom page size,"Datasette defaults to displaying 100 rows per page, for both tables and views. You can change this default page size on a per-table or per-view basis using the ""size"" key in metadata.json :
[[[cog
metadata_example(cog, {
""databases"": {
""mydatabase"": {
""tables"": {
""example_table"": {
""size"": 10
}
}
}
}
})
]]]
[[[end]]]
This size can still be over-ridden by passing e.g. ?_size=50 in the query string.",736,
207,Setting which columns can be used for sorting,"Datasette allows any column to be used for sorting by default. If you need to
control which columns are available for sorting you can do so using the optional
sortable_columns key:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""sortable_columns"": [
""height"",
""weight""
]
}
}
}
}
})
]]]
[[[end]]]
This will restrict sorting of example_table to just the height and
weight columns.
You can also disable sorting entirely by setting ""sortable_columns"": []
You can use sortable_columns to enable specific sort orders for a view called name_of_view in the database my_database like so:
[[[cog
metadata_example(cog, {
""databases"": {
""my_database"": {
""tables"": {
""name_of_view"": {
""sortable_columns"": [
""clicks"",
""impressions""
]
}
}
}
}
})
]]]
[[[end]]]",736,
208,Specifying the label column for a table,"Datasette's HTML interface attempts to display foreign key references as
labelled hyperlinks. By default, it looks for referenced tables that only have
two columns: a primary key column and one other. It assumes that the second
column should be used as the link label.
If your table has more than two columns you can specify which column should be
used for the link label with the label_column property:
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""label_column"": ""title""
}
}
}
}
})
]]]
[[[end]]]",736,
209,Hiding tables,"You can hide tables from the database listing view (in the same way that FTS and
SpatiaLite tables are automatically hidden) using ""hidden"": true :
[[[cog
metadata_example(cog, {
""databases"": {
""database1"": {
""tables"": {
""example_table"": {
""hidden"": True
}
}
}
}
})
]]]
[[[end]]]",736,
210,Metadata reference,A full reference of every supported option in a metadata.json or metadata.yaml file.,736,
211,Top-level metadata,"""Top-level"" metadata refers to fields that can be specified at the root level of a metadata file. These attributes are meant to describe the entire Datasette instance.
The following are the full list of allowed top-level metadata fields:
title
description
description_html
license
license_url
source
source_url",736,
212,Database-level metadata,"""Database-level"" metadata refers to fields that can be specified for each database in a Datasette instance. These attributes should be listed under a database inside the ""databases"" field.
The following are the full list of allowed database-level metadata fields:
source
source_url
license
license_url
about
about_url",736,
213,Table-level metadata,"""Table-level"" metadata refers to fields that can be specified for each table in a Datasette instance. These attributes should be listed under a specific table using the ""tables"" field.
The following are the full list of allowed table-level metadata fields:
source
source_url
license
license_url
about
about_url
hidden
sort/sort_desc
size
sortable_columns
label_column
facets
fts_table
fts_pk
searchmode
columns",736,
214,Pages and API endpoints,"The Datasette web application offers a number of different pages that can be accessed to explore the data in question, each of which is accompanied by an equivalent JSON API.",736,
215,Top-level index,"The root page of any Datasette installation is an index page that lists all of the currently attached databases. Some examples:
fivethirtyeight.datasettes.com
global-power-plants.datasettes.com
register-of-members-interests.datasettes.com
Add /.json to the end of the URL for the JSON version of the underlying data:
fivethirtyeight.datasettes.com/.json
global-power-plants.datasettes.com/.json
register-of-members-interests.datasettes.com/.json",736,
216,Database,"Each database has a page listing the tables, views and canned queries available for that database. If the execute-sql permission is enabled (it's on by default) there will also be an interface for executing arbitrary SQL select queries against the data.
Examples:
fivethirtyeight.datasettes.com/fivethirtyeight
global-power-plants.datasettes.com/global-power-plants
The JSON version of this page provides programmatic access to the underlying data:
fivethirtyeight.datasettes.com/fivethirtyeight.json
global-power-plants.datasettes.com/global-power-plants.json",736,
217,Hidden tables,"Some tables listed on the database page are treated as hidden. Hidden tables are not completely invisible - they can be accessed through the ""hidden tables"" link at the bottom of the page. They are hidden because they represent low-level implementation details which are generally not useful to end-users of Datasette.
The following tables are hidden by default:
Any table with a name that starts with an underscore - this is a Datasette convention to help plugins easily hide their own internal tables.
Tables that have been configured as ""hidden"": true using Hiding tables .
*_fts tables that implement SQLite full-text search indexes.
Tables relating to the inner workings of the SpatiaLite SQLite extension.
sqlite_stat tables used to store statistics used by the query optimizer.",736,
218,Table,"The table page is the heart of Datasette: it allows users to interactively explore the contents of a database table, including sorting, filtering, Full-text search and applying Facets .
The HTML interface is worth spending some time exploring. As with other pages, you can return the JSON data by appending .json to the URL path, before any ? query string arguments.
The query string arguments are described in more detail here: Table arguments
You can also use the table page to interactively construct a SQL query - by applying different filters and a sort order for example - and then click the ""View and edit SQL"" link to see the SQL query that was used for the page and edit and re-submit it.
Some examples:
../items lists all of the line-items registered by UK MPs as potential conflicts of interest. It demonstrates Datasette's support for Full-text search .
../antiquities-act%2Factions_under_antiquities_act is an interface for exploring the ""actions under the antiquities act"" data table published by FiveThirtyEight.
../global-power-plants?country_long=United+Kingdom&primary_fuel=Gas is a filtered table page showing every Gas power plant in the United Kingdom. It includes some default facets (configured using its metadata.json ) and uses the datasette-cluster-map plugin to show a map of the results.",736,
219,Row,"Every row in every Datasette table has its own URL. This means individual records can be linked to directly.
Table cells with extremely long text contents are truncated on the table view according to the truncate_cells_html setting. If a cell has been truncated the full length version of that cell will be available on the row page.
Rows which are the targets of foreign key references from other tables will show a link to a filtered search for all records that reference that row. Here's an example from the Registers of Members Interests database:
../people/uk~2Eorg~2Epublicwhip~2Fperson~2F10001
Note that this URL includes the encoded primary key of the record.
Here's that same page as JSON:
../people/uk~2Eorg~2Epublicwhip~2Fperson~2F10001.json",736,
220,Testing plugins,"We recommend using pytest to write automated tests for your plugins.
If you use the template described in Starting an installable plugin using cookiecutter your plugin will start with a single test in your tests/ directory that looks like this:
from datasette.app import Datasette
import pytest
@pytest.mark.asyncio
async def test_plugin_is_installed():
datasette = Datasette(memory=True)
response = await datasette.client.get(""/-/plugins.json"")
assert response.status_code == 200
installed_plugins = {p[""name""] for p in response.json()}
assert (
""datasette-plugin-template-demo""
in installed_plugins
)
This test uses the datasette.client object to exercise a test instance of Datasette. datasette.client is a wrapper around the HTTPX Python library which can imitate HTTP requests using ASGI. This is the recommended way to write tests against a Datasette instance.
This test also uses the pytest-asyncio package to add support for async def test functions running under pytest.
You can install these packages like so:
pip install pytest pytest-asyncio
If you are building an installable package you can add them as test dependencies to your setup.py module like this:
setup(
name=""datasette-my-plugin"",
# ...
extras_require={""test"": [""pytest"", ""pytest-asyncio""]},
tests_require=[""datasette-my-plugin[test]""],
)
You can then install the test dependencies like so:
pip install -e '.[test]'
Then run the tests using pytest like so:
pytest",736,
221,Setting up a Datasette test instance,"The above example shows the easiest way to start writing tests against a Datasette instance:
from datasette.app import Datasette
import pytest
@pytest.mark.asyncio
async def test_plugin_is_installed():
datasette = Datasette(memory=True)
response = await datasette.client.get(""/-/plugins.json"")
assert response.status_code == 200
Creating a Datasette() instance like this as useful shortcut in tests, but there is one detail you need to be aware of. It's important to ensure that the async method .invoke_startup() is called on that instance. You can do that like this:
datasette = Datasette(memory=True)
await datasette.invoke_startup()
This method registers any startup(datasette) or prepare_jinja2_environment(env, datasette) plugins that might themselves need to make async calls.
If you are using await datasette.client.get() and similar methods then you don't need to worry about this - Datasette automatically calls invoke_startup() the first time it handles a request.",736,
222,Using datasette.client in tests,"The datasette.client mechanism is designed for use in tests. It provides access to a pre-configured HTTPX async client instance that can make GET, POST and other HTTP requests against a Datasette instance from inside a test.
A simple test looks like this:
@pytest.mark.asyncio
async def test_homepage():
ds = Datasette(memory=True)
response = await ds.client.get(""/"")
html = response.text
assert ""
"" in html
Or for a JSON API:
@pytest.mark.asyncio
async def test_actor_is_null():
ds = Datasette(memory=True)
response = await ds.client.get(""/-/actor.json"")
assert response.json() == {""actor"": None}
To make requests as an authenticated actor, create a signed ds_cookie using the datasette.client.actor_cookie() helper function and pass it in cookies= like this:
@pytest.mark.asyncio
async def test_signed_cookie_actor():
ds = Datasette(memory=True)
cookies = {""ds_actor"": ds.client.actor_cookie({""id"": ""root""})}
response = await ds.client.get(""/-/actor.json"", cookies=cookies)
assert response.json() == {""actor"": {""id"": ""root""}}",736,
223,Using pdb for errors thrown inside Datasette,"If an exception occurs within Datasette itself during a test, the response returned to your plugin will have a response.status_code value of 500.
You can add pdb=True to the Datasette constructor to drop into a Python debugger session inside your test run instead of getting back a 500 response code. This is equivalent to running the datasette command-line tool with the --pdb option.
Here's what that looks like in a test function:
def test_that_opens_the_debugger_or_errors():
ds = Datasette([db_path], pdb=True)
response = await ds.client.get(""/"")
If you use this pattern you will need to run pytest with the -s option to avoid capturing stdin/stdout in order to interact with the debugger prompt.",736,
224,Using pytest fixtures,"Pytest fixtures can be used to create initial testable objects which can then be used by multiple tests.
A common pattern for Datasette plugins is to create a fixture which sets up a temporary test database and wraps it in a Datasette instance.
Here's an example that uses the sqlite-utils library to populate a temporary test database. It also sets the title of that table using a simulated metadata.json configuration:
from datasette.app import Datasette
import pytest
import sqlite_utils
@pytest.fixture(scope=""session"")
def datasette(tmp_path_factory):
db_directory = tmp_path_factory.mktemp(""dbs"")
db_path = db_directory / ""test.db""
db = sqlite_utils.Database(db_path)
db[""dogs""].insert_all(
[
{""id"": 1, ""name"": ""Cleo"", ""age"": 5},
{""id"": 2, ""name"": ""Pancakes"", ""age"": 4},
],
pk=""id"",
)
datasette = Datasette(
[db_path],
metadata={
""databases"": {
""test"": {
""tables"": {
""dogs"": {""title"": ""Some dogs""}
}
}
}
},
)
return datasette
@pytest.mark.asyncio
async def test_example_table_json(datasette):
response = await datasette.client.get(
""/test/dogs.json?_shape=array""
)
assert response.status_code == 200
assert response.json() == [
{""id"": 1, ""name"": ""Cleo"", ""age"": 5},
{""id"": 2, ""name"": ""Pancakes"", ""age"": 4},
]
@pytest.mark.asyncio
async def test_example_table_html(datasette):
response = await datasette.client.get(""/test/dogs"")
assert "">Some dogs
"" in response.text
Here the datasette() function defines the fixture, which is than automatically passed to the two test functions based on pytest automatically matching their datasette function parameters.
The @pytest.fixture(scope=""session"") line here ensures the fixture is reused for the full pytest execution session. This means that the temporary database file will be created once and reused for each test.
If you want to create that test database repeatedly for every individual test function, write the fixture function like this instead. You may want to do this if your plugin modifies the database contents in some way:
@pytest.fixture
def datasette(tmp_path_factory):
# This fixture will be executed repeatedly for every test
...",736,
225,Testing outbound HTTP calls with pytest-httpx,"If your plugin makes outbound HTTP calls - for example datasette-auth-github or datasette-import-table - you may need to mock those HTTP requests in your tests.
The pytest-httpx package is a useful library for mocking calls. It can be tricky to use with Datasette though since it mocks all HTTPX requests, and Datasette's own testing mechanism uses HTTPX internally.
To avoid breaking your tests, you can return [""localhost""] from the non_mocked_hosts() fixture.
As an example, here's a very simple plugin which executes an HTTP response and returns the resulting content:
from datasette import hookimpl
from datasette.utils.asgi import Response
import httpx
@hookimpl
def register_routes():
return [
(r""^/-/fetch-url$"", fetch_url),
]
async def fetch_url(datasette, request):
if request.method == ""GET"":
return Response.html(
""""""
"""""".format(
request.scope[""csrftoken""]()
)
)
vars = await request.post_vars()
url = vars[""url""]
return Response.text(httpx.get(url).text)
Here's a test for that plugin that mocks the HTTPX outbound request:
from datasette.app import Datasette
import pytest
@pytest.fixture
def non_mocked_hosts():
# This ensures httpx-mock will not affect Datasette's own
# httpx calls made in the tests by datasette.client:
return [""localhost""]
async def test_outbound_http_call(httpx_mock):
httpx_mock.add_response(
url=""https://www.example.com/"",
text=""Hello world"",
)
datasette = Datasette([], memory=True)
response = await datasette.client.post(
""/-/fetch-url"",
data={""url"": ""https://www.example.com/""},
)
assert response.text == ""Hello world""
outbound_request = httpx_mock.get_request()
assert (
outbound_request.url == ""https://www.example.com/""
)",736,
226,Registering a plugin for the duration of a test,"When writing tests for plugins you may find it useful to register a test plugin just for the duration of a single test. You can do this using pm.register() and pm.unregister() like this:
from datasette import hookimpl
from datasette.app import Datasette
from datasette.plugins import pm
import pytest
@pytest.mark.asyncio
async def test_using_test_plugin():
class TestPlugin:
__name__ = ""TestPlugin""
# Use hookimpl and method names to register hooks
@hookimpl
def register_routes(self):
return [
(r""^/error$"", lambda: 1 / 0),
]
pm.register(TestPlugin(), name=""undo"")
try:
# The test implementation goes here
datasette = Datasette()
response = await datasette.client.get(""/error"")
assert response.status_code == 500
finally:
pm.unregister(name=""undo"")
To reuse the same temporary plugin in multiple tests, you can register it inside a fixture in your conftest.py file like this:
from datasette import hookimpl
from datasette.app import Datasette
from datasette.plugins import pm
import pytest
import pytest_asyncio
@pytest_asyncio.fixture
async def datasette_with_plugin():
class TestPlugin:
__name__ = ""TestPlugin""
@hookimpl
def register_routes(self):
return [
(r""^/error$"", lambda: 1 / 0),
]
pm.register(TestPlugin(), name=""undo"")
try:
yield Datasette()
finally:
pm.unregister(name=""undo"")
Note the yield statement here - this ensures that the finally: block that unregisters the plugin is executed only after the test function itself has completed.
Then in a test:
@pytest.mark.asyncio
async def test_error(datasette_with_plugin):
response = await datasette_with_plugin.client.get(""/error"")
assert response.status_code == 500",736,
227,Internals for plugins,Many Plugin hooks are passed objects that provide access to internal Datasette functionality. The interface to these objects should not be considered stable with the exception of methods that are documented here.,736,
228,Request object,"The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties:
.scope - dictionary
The ASGI scope that was used to construct this request, described in the ASGI HTTP connection scope specification.
.method - string
The HTTP method for this request, usually GET or POST .
.url - string
The full URL for this request, e.g. https://latest.datasette.io/fixtures .
.scheme - string
The request scheme - usually https or http .
.headers - dictionary (str -> str)
A dictionary of incoming HTTP request headers. Header names have been converted to lowercase.
.cookies - dictionary (str -> str)
A dictionary of incoming cookies
.host - string
The host header from the incoming request, e.g. latest.datasette.io or localhost .
.path - string
The path of the request excluding the query string, e.g. /fixtures .
.full_path - string
The path of the request including the query string if one is present, e.g. /fixtures?sql=select+sqlite_version() .
.query_string - string
The query string component of the request, without the ? - e.g. name__contains=sam&age__gt=10 .
.args - MultiParams
An object representing the parsed query string parameters, see below.
.url_vars - dictionary (str -> str)
Variables extracted from the URL path, if that path was defined using a regular expression. See register_routes(datasette) .
.actor - dictionary (str -> Any) or None
The currently authenticated actor (see actors ), or None if the request is unauthenticated.
The object also has two awaitable methods:
await request.post_vars() - dictionary
Returns a dictionary of form variables that were submitted in the request body via POST . Don't forget to read about CSRF protection !
await request.post_body() - bytes
Returns the un-parsed body of a request submitted by POST - useful for things like incoming JSON data.
And a class method that can be used to create fake request objects for use in tests:
fake(path_with_query_string, method=""GET"", scheme=""http"", url_vars=None)
Returns a Request instance for the specified path and method. For example:
from datasette import Request
from pprint import pprint
request = Request.fake(
""/fixtures/facetable/"",
url_vars={""database"": ""fixtures"", ""table"": ""facetable""},
)
pprint(request.scope)
This outputs:
{'http_version': '1.1',
'method': 'GET',
'path': '/fixtures/facetable/',
'query_string': b'',
'raw_path': b'/fixtures/facetable/',
'scheme': 'http',
'type': 'http',
'url_route': {'kwargs': {'database': 'fixtures', 'table': 'facetable'}}}",736,
229,The MultiParams class,"request.args is a MultiParams object - a dictionary-like object which provides access to query string parameters that may have multiple values.
Consider the query string ?foo=1&foo=2&bar=3 - with two values for foo and one value for bar .
request.args[key] - string
Returns the first value for that key, or raises a KeyError if the key is missing. For the above example request.args[""foo""] would return ""1"" .
request.args.get(key) - string or None
Returns the first value for that key, or None if the key is missing. Pass a second argument to specify a different default, e.g. q = request.args.get(""q"", """") .
request.args.getlist(key) - list of strings
Returns the list of strings for that key. request.args.getlist(""foo"") would return [""1"", ""2""] in the above example. request.args.getlist(""bar"") would return [""3""] . If the key is missing an empty list will be returned.
request.args.keys() - list of strings
Returns the list of available keys - for the example this would be [""foo"", ""bar""] .
key in request.args - True or False
You can use if key in request.args to check if a key is present.
for key in request.args - iterator
This lets you loop through every available key.
len(request.args) - integer
Returns the number of keys.",736,
230,Response class,"The Response class can be returned from view functions that have been registered using the register_routes(datasette) hook.
The Response() constructor takes the following arguments:
body - string
The body of the response.
status - integer (optional)
The HTTP status - defaults to 200.
headers - dictionary (optional)
A dictionary of extra HTTP headers, e.g. {""x-hello"": ""world""} .
content_type - string (optional)
The content-type for the response. Defaults to text/plain .
For example:
from datasette.utils.asgi import Response
response = Response(
""This is XML"",
content_type=""application/xml; charset=utf-8"",
)
The quickest way to create responses is using the Response.text(...) , Response.html(...) , Response.json(...) or Response.redirect(...) helper methods:
from datasette.utils.asgi import Response
html_response = Response.html(""This is HTML"")
json_response = Response.json({""this_is"": ""json""})
text_response = Response.text(
""This will become utf-8 encoded text""
)
# Redirects are served as 302, unless you pass status=301:
redirect_response = Response.redirect(
""https://latest.datasette.io/""
)
Each of these responses will use the correct corresponding content-type - text/html; charset=utf-8 , application/json; charset=utf-8 or text/plain; charset=utf-8 respectively.
Each of the helper methods take optional status= and headers= arguments, documented above.",736,
231,Returning a response with .asgi_send(send),"In most cases you will return Response objects from your own view functions. You can also use a Response instance to respond at a lower level via ASGI, for example if you are writing code that uses the asgi_wrapper(datasette) hook.
Create a Response object and then use await response.asgi_send(send) , passing the ASGI send function. For example:
async def require_authorization(scope, receive, send):
response = Response.text(
""401 Authorization Required"",
headers={
""www-authenticate"": 'Basic realm=""Datasette"", charset=""UTF-8""'
},
status=401,
)
await response.asgi_send(send)",736,
232,Setting cookies with response.set_cookie(),"To set cookies on the response, use the response.set_cookie(...) method. The method signature looks like this:
def set_cookie(
self,
key,
value="""",
max_age=None,
expires=None,
path=""/"",
domain=None,
secure=False,
httponly=False,
samesite=""lax"",
): ...
You can use this with datasette.sign() to set signed cookies. Here's how you would set the ds_actor cookie for use with Datasette authentication :
response = Response.redirect(""/"")
response.set_cookie(
""ds_actor"",
datasette.sign({""a"": {""id"": ""cleopaws""}}, ""actor""),
)
return response",736,
233,Datasette class,"This object is an instance of the Datasette class, passed to many plugin hooks as an argument called datasette .
You can create your own instance of this - for example to help write tests for a plugin - like so:
from datasette.app import Datasette
# With no arguments a single in-memory database will be attached
datasette = Datasette()
# The files= argument can load files from disk
datasette = Datasette(files=[""/path/to/my-database.db""])
# Pass metadata as a JSON dictionary like this
datasette = Datasette(
files=[""/path/to/my-database.db""],
metadata={
""databases"": {
""my-database"": {
""description"": ""This is my database""
}
}
},
)
Constructor parameters include:
files=[...] - a list of database files to open
immutables=[...] - a list of database files to open in immutable mode
metadata={...} - a dictionary of Metadata
config_dir=... - the configuration directory to use, stored in datasette.config_dir",736,
234,.databases,"Property exposing a collections.OrderedDict of databases currently connected to Datasette.
The dictionary keys are the name of the database that is used in the URL - e.g. /fixtures would have a key of ""fixtures"" . The values are Database class instances.
All databases are listed, irrespective of user permissions.",736,
235,.permissions,"Property exposing a dictionary of permissions that have been registered using the register_permissions(datasette) plugin hook.
The dictionary keys are the permission names - e.g. view-instance - and the values are Permission() objects describing the permission. Here is a description of that object .",736,
236,".plugin_config(plugin_name, database=None, table=None)","plugin_name - string
The name of the plugin to look up configuration for. Usually this is something similar to datasette-cluster-map .
database - None or string
The database the user is interacting with.
table - None or string
The table the user is interacting with.
This method lets you read plugin configuration values that were set in datasette.yaml . See Writing plugins that accept configuration for full details of how this method should be used.
The return value will be the value from the configuration file - usually a dictionary.
If the plugin is not configured the return value will be None .",736,
237,"await .render_template(template, context=None, request=None)","template - string, list of strings or jinja2.Template
The template file to be rendered, e.g. my_plugin.html . Datasette will search for this file first in the --template-dir= location, if it was specified - then in the plugin's bundled templates and finally in Datasette's set of default templates.
If this is a list of template file names then the first one that exists will be loaded and rendered.
If this is a Jinja Template object it will be used directly.
context - None or a Python dictionary
The context variables to pass to the template.
request - request object or None
If you pass a Datasette request object here it will be made available to the template.
Renders a Jinja template using Datasette's preconfigured instance of Jinja and returns the resulting string. The template will have access to Datasette's default template functions and any functions that have been made available by other plugins.",736,
238,await .actors_from_ids(actor_ids),"actor_ids - list of strings or integers
A list of actor IDs to look up.
Returns a dictionary, where the keys are the IDs passed to it and the values are the corresponding actor dictionaries.
This method is mainly designed to be used with plugins. See the actors_from_ids(datasette, actor_ids) documentation for details.
If no plugins that implement that hook are installed, the default return value looks like this:
{
""1"": {""id"": ""1""},
""2"": {""id"": ""2""}
}",736,
239,"await .permission_allowed(actor, action, resource=None, default=...)","actor - dictionary
The authenticated actor. This is usually request.actor .
action - string
The name of the action that is being permission checked.
resource - string or tuple, optional
The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource.
default - optional: True, False or None
What value should be returned by default if nothing provides an opinion on this permission check.
Set to True for default allow or False for default deny.
If not specified the default from the Permission() tuple that was registered using register_permissions(datasette) will be used.
Check if the given actor has permission to perform the given action on the given resource.
Some permission checks are carried out against rules defined in datasette.yaml , while other custom permissions may be decided by plugins that implement the permission_allowed(datasette, actor, action, resource) plugin hook.
If neither metadata.json nor any of the plugins provide an answer to the permission query the default argument will be returned.
See Built-in permissions for a full list of permission actions included in Datasette core.",736,
240,"await .ensure_permissions(actor, permissions)","actor - dictionary
The authenticated actor. This is usually request.actor .
permissions - list
A list of permissions to check. Each permission in that list can be a string action name or a 2-tuple of (action, resource) .
This method allows multiple permissions to be checked at once. It raises a datasette.Forbidden exception if any of the checks are denied before one of them is explicitly granted.
This is useful when you need to check multiple permissions at once. For example, an actor should be able to view a table if either one of the following checks returns True or not a single one of them returns False :
await datasette.ensure_permissions(
request.actor,
[
(""view-table"", (database, table)),
(""view-database"", database),
""view-instance"",
],
)",736,
241,"await .check_visibility(actor, action=None, resource=None, permissions=None)","actor - dictionary
The authenticated actor. This is usually request.actor .
action - string, optional
The name of the action that is being permission checked.
resource - string or tuple, optional
The resource, e.g. the name of the database, or a tuple of two strings containing the name of the database and the name of the table. Only some permissions apply to a resource.
permissions - list of action strings or (action, resource) tuples, optional
Provide this instead of action and resource to check multiple permissions at once.
This convenience method can be used to answer the question ""should this item be considered private, in that it is visible to me but it is not visible to anonymous users?""
It returns a tuple of two booleans, (visible, private) . visible indicates if the actor can see this resource. private will be True if an anonymous user would not be able to view the resource.
This example checks if the user can access a specific table, and sets private so that a padlock icon can later be displayed:
visible, private = await datasette.check_visibility(
request.actor,
action=""view-table"",
resource=(database, table),
)
The following example runs three checks in a row, similar to await .ensure_permissions(actor, permissions) . If any of the checks are denied before one of them is explicitly granted then visible will be False . private will be True if an anonymous user would not be able to view the resource.
visible, private = await datasette.check_visibility(
request.actor,
permissions=[
(""view-table"", (database, table)),
(""view-database"", database),
""view-instance"",
],
)",736,
242,".create_token(actor_id, expires_after=None, restrict_all=None, restrict_database=None, restrict_resource=None)","actor_id - string
The ID of the actor to create a token for.
expires_after - int, optional
The number of seconds after which the token should expire.
restrict_all - iterable, optional
A list of actions that this token should be restricted to across all databases and resources.
restrict_database - dict, optional
For restricting actions within specific databases, e.g. {""mydb"": [""view-table"", ""view-query""]} .
restrict_resource - dict, optional
For restricting actions to specific resources (tables, SQL views and Canned queries ) within a database. For example: {""mydb"": {""mytable"": [""insert-row"", ""update-row""]}} .
This method returns a signed API token of the format dstok_... which can be used to authenticate requests to the Datasette API.
All tokens must have an actor_id string indicating the ID of the actor which the token will act on behalf of.
Tokens default to lasting forever, but can be set to expire after a given number of seconds using the expires_after argument. The following code creates a token for user1 that will expire after an hour:
token = datasette.create_token(
actor_id=""user1"",
expires_after=3600,
)
The three restrict_* arguments can be used to create a token that has additional restrictions beyond what the associated actor is allowed to do.
The following example creates a token that can access view-instance and view-table across everything, can additionally use view-query for anything in the docs database and is allowed to execute insert-row and update-row in the attachments table in that database:
token = datasette.create_token(
actor_id=""user1"",
restrict_all=(""view-instance"", ""view-table""),
restrict_database={""docs"": (""view-query"",)},
restrict_resource={
""docs"": {
""attachments"": (""insert-row"", ""update-row"")
}
},
)",736,
243,.get_permission(name_or_abbr),"name_or_abbr - string
The name or abbreviation of the permission to look up, e.g. view-table or vt .
Returns a Permission object representing the permission, or raises a KeyError if one is not found.",736,
244,.get_database(name),"name - string, optional
The name of the database - optional.
Returns the specified database object. Raises a KeyError if the database does not exist. Call this method without an argument to return the first connected database.",736,
245,.get_internal_database(),Returns a database object for reading and writing to the private internal database .,736,
246,".add_database(db, name=None, route=None)","db - datasette.database.Database instance
The database to be attached.
name - string, optional
The name to be used for this database . If not specified Datasette will pick one based on the filename or memory name.
route - string, optional
This will be used in the URL path. If not specified, it will default to the same thing as the name .
The datasette.add_database(db) method lets you add a new database to the current Datasette instance.
The db parameter should be an instance of the datasette.database.Database class. For example:
from datasette.database import Database
datasette.add_database(
Database(
datasette,
path=""path/to/my-new-database.db"",
)
)
This will add a mutable database and serve it at /my-new-database .
Use is_mutable=False to add an immutable database.
.add_database() returns the Database instance, with its name set as the database.name attribute. Any time you are working with a newly added database you should use the return value of .add_database() , for example:
db = datasette.add_database(
Database(datasette, memory_name=""statistics"")
)
await db.execute_write(
""CREATE TABLE foo(id integer primary key)""
)",736,
247,.add_memory_database(name),"Adds a shared in-memory database with the specified name:
datasette.add_memory_database(""statistics"")
This is a shortcut for the following:
from datasette.database import Database
datasette.add_database(
Database(datasette, memory_name=""statistics"")
)
Using either of these pattern will result in the in-memory database being served at /statistics .",736,
248,.remove_database(name),"name - string
The name of the database to be removed.
This removes a database that has been previously added. name= is the unique name of that database.",736,
249,await .track_event(event),"event - Event
An instance of a subclass of datasette.events.Event .
Plugins can call this to track events, using classes they have previously registered. See Event tracking for details.
The event will then be passed to all plugins that have registered to receive events using the track_event(datasette, event) hook.
Example usage, assuming the plugin has previously registered the BanUserEvent class:
await datasette.track_event(
BanUserEvent(user={""id"": 1, ""username"": ""cleverbot""})
)",736,
250,".sign(value, namespace=""default"")","value - any serializable type
The value to be signed.
namespace - string, optional
An alternative namespace, see the itsdangerous salt documentation .
Utility method for signing values, such that you can safely pass data to and from an untrusted environment. This is a wrapper around the itsdangerous library.
This method returns a signed string, which can be decoded and verified using .unsign(value, namespace=""default"") .",736,
251,".unsign(value, namespace=""default"")","signed - any serializable type
The signed string that was created using .sign(value, namespace=""default"") .
namespace - string, optional
The alternative namespace, if one was used.
Returns the original, decoded object that was passed to .sign(value, namespace=""default"") . If the signature is not valid this raises a itsdangerous.BadSignature exception.",736,
252,".add_message(request, message, type=datasette.INFO)","request - Request
The current Request object
message - string
The message string
type - constant, optional
The message type - datasette.INFO , datasette.WARNING or datasette.ERROR
Datasette's flash messaging mechanism allows you to add a message that will be displayed to the user on the next page that they visit. Messages are persisted in a ds_messages cookie. This method adds a message to that cookie.
You can try out these messages (including the different visual styling of the three message types) using the /-/messages debugging tool.",736,
253,".absolute_url(request, path)","request - Request
The current Request object
path - string
A path, for example /dbname/table.json
Returns the absolute URL for the given path, including the protocol and host. For example:
absolute_url = datasette.absolute_url(
request, ""/dbname/table.json""
)
# Would return ""http://localhost:8001/dbname/table.json""
The current request object is used to determine the hostname and protocol that should be used for the returned URL. The force_https_urls configuration setting is taken into account.",736,
254,.setting(key),"key - string
The name of the setting, e.g. base_url .
Returns the configured value for the specified setting . This can be a string, boolean or integer depending on the requested setting.
For example:
downloads_are_allowed = datasette.setting(""allow_download"")",736,
255,.resolve_database(request),"request - Request object
A request object
If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a database named group, you can use this method to resolve the database object.
This returns a Database instance.
If the database cannot be found, it raises a datasette.utils.asgi.DatabaseNotFound exception - which is a subclass of datasette.utils.asgi.NotFound with a .database_name attribute set to the name of the database that was requested.",736,
256,.resolve_table(request),"request - Request object
A request object
This assumes that the regular expression for your route declares both a database and a table named group.
It returns a ResolvedTable named tuple instance with the following fields:
db - Database
The database object
table - string
The name of the table (or view)
is_view - boolean
True if this is a view, False if it is a table
If the database or table cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception.
If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception - a subclass of datasette.utils.asgi.NotFound with .database_name and .table attributes.",736,
257,.resolve_row(request),"request - Request object
A request object
This method assumes your route declares named groups for database , table and pks .
It returns a ResolvedRow named tuple instance with the following fields:
db - Database
The database object
table - string
The name of the table
sql - string
SQL snippet that can be used in a WHERE clause to select the row
params - dict
Parameters that should be passed to the SQL query
pks - list
List of primary key column names
pk_values - list
List of primary key values decoded from the URL
row - sqlite3.Row
The row itself
If the database or table cannot be found it raises a datasette.utils.asgi.DatabaseNotFound exception.
If the table does not exist it raises a datasette.utils.asgi.TableNotFound exception.
If the row cannot be found it raises a datasette.utils.asgi.RowNotFound exception. This has .database_name , .table and .pk_values attributes, extracted from the request path.",736,
258,datasette.client,"Plugins can make internal simulated HTTP requests to the Datasette instance within which they are running. This ensures that all of Datasette's external JSON APIs are also available to plugins, while avoiding the overhead of making an external HTTP call to access those APIs.
The datasette.client object is a wrapper around the HTTPX Python library , providing an async-friendly API that is similar to the widely used Requests library .
It offers the following methods:
await datasette.client.get(path, **kwargs) - returns HTTPX Response
Execute an internal GET request against that path.
await datasette.client.post(path, **kwargs) - returns HTTPX Response
Execute an internal POST request. Use data={""name"": ""value""} to pass form parameters.
await datasette.client.options(path, **kwargs) - returns HTTPX Response
Execute an internal OPTIONS request.
await datasette.client.head(path, **kwargs) - returns HTTPX Response
Execute an internal HEAD request.
await datasette.client.put(path, **kwargs) - returns HTTPX Response
Execute an internal PUT request.
await datasette.client.patch(path, **kwargs) - returns HTTPX Response
Execute an internal PATCH request.
await datasette.client.delete(path, **kwargs) - returns HTTPX Response
Execute an internal DELETE request.
await datasette.client.request(method, path, **kwargs) - returns HTTPX Response
Execute an internal request with the given HTTP method against that path.
These methods can be used with datasette.urls - for example:
table_json = (
await datasette.client.get(
datasette.urls.table(
""fixtures"", ""facetable"", format=""json""
)
)
).json()
datasette.client methods automatically take the current base_url setting into account, whether or not you use the datasette.urls family of methods to construct the path.
For documentation on available **kwargs options and the shape of the HTTPX Response object refer to the HTTPX Async documentation .",736,
259,datasette.urls,"The datasette.urls object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any base_url configuration setting that might be in effect.
datasette.urls.instance(format=None)
Returns the URL to the Datasette instance root page. This is usually ""/"" .
datasette.urls.path(path, format=None)
Takes a path and returns the full path, taking base_url into account.
For example, datasette.urls.path(""-/logout"") will return the path to the logout page, which will be ""/-/logout"" by default or /prefix-path/-/logout if base_url is set to /prefix-path/
datasette.urls.logout()
Returns the URL to the logout page, usually ""/-/logout""
datasette.urls.static(path)
Returns the URL of one of Datasette's default static assets, for example ""/-/static/app.css""
datasette.urls.static_plugins(plugin_name, path)
Returns the URL of one of the static assets belonging to a plugin.
datasette.urls.static_plugins(""datasette_cluster_map"", ""datasette-cluster-map.js"") would return ""/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js""
datasette.urls.static(path)
Returns the URL of one of Datasette's default static assets, for example ""/-/static/app.css""
datasette.urls.database(database_name, format=None)
Returns the URL to a database page, for example ""/fixtures""
datasette.urls.table(database_name, table_name, format=None)
Returns the URL to a table page, for example ""/fixtures/facetable""
datasette.urls.query(database_name, query_name, format=None)
Returns the URL to a query page, for example ""/fixtures/pragma_cache_size""
These functions can be accessed via the {{ urls }} object in Datasette templates, for example:
HomepageFixtures databasefacetable tablepragma_cache_size query
Use the format=""json"" (or ""csv"" or other formats supported by plugins) arguments to get back URLs to the JSON representation. This is the path with .json added on the end.
These methods each return a datasette.utils.PrefixedUrlString object, which is a subclass of the Python str type. This allows the logic that considers the base_url setting to detect if that prefix has already been applied to the path.",736,
260,Database class,"Instances of the Database class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.",736,
261,"Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None)","The Database() constructor can be used by plugins, in conjunction with .add_database(db, name=None, route=None) , to create and register new databases.
The arguments are as follows:
ds - Datasette class (required)
The Datasette instance you are attaching this database to.
path - string
Path to a SQLite database file on disk.
is_mutable - boolean
Set this to False to cause Datasette to open the file in immutable mode.
is_memory - boolean
Use this to create non-shared memory connections.
memory_name - string or None
Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist an changes made to them for the lifetime of the Datasette server process.
The first argument is the datasette instance you are attaching to, the second is a path= , then is_mutable and is_memory are both optional arguments.",736,
262,db.hash,"If the database was opened in immutable mode, this property returns the 64 character SHA-256 hash of the database contents as a string. Otherwise it returns None .",736,
263,"await db.execute(sql, ...)","Executes a SQL query against the database and returns the resulting rows (see Results ).
sql - string (required)
The SQL query to execute. This can include ? or :named parameters.
params - list or dict
A list or dictionary of values to use for the parameters. List for ? , dictionary for :named .
truncate - boolean
Should the rows returned by the query be truncated at the maximum page size? Defaults to True , set this to False to disable truncation.
custom_time_limit - integer ms
A custom time limit for this query. This can be set to a lower value than the Datasette configured default. If a query takes longer than this it will be terminated early and raise a dataette.database.QueryInterrupted exception.
page_size - integer
Set a custom page size for truncation, over-riding the configured Datasette default.
log_sql_errors - boolean
Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to True .",736,
264,Results,"The db.execute() method returns a single Results object. This can be used to access the rows returned by the query.
Iterating over a Results object will yield SQLite Row objects . Each of these can be treated as a tuple or can be accessed using row[""column""] syntax:
info = []
results = await db.execute(""select name from sqlite_master"")
for row in results:
info.append(row[""name""])
The Results object also has the following properties and methods:
.truncated - boolean
Indicates if this query was truncated - if it returned more results than the specified page_size . If this is true then the results object will only provide access to the first page_size rows in the query result. You can disable truncation by passing truncate=False to the db.query() method.
.columns - list of strings
A list of column names returned by the query.
.rows - list of sqlite3.Row
This property provides direct access to the list of rows returned by the database. You can access specific rows by index using results.rows[0] .
.first() - row or None
Returns the first row in the results, or None if no rows were returned.
.single_value()
Returns the value of the first column of the first row of results - but only if the query returned a single row with a single column. Raises a datasette.database.MultipleValues exception otherwise.
.__len__()
Calling len(results) returns the (truncated) number of returned results.",736,
265,await db.execute_fn(fn),"Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the await .
Example usage:
def get_version(conn):
return conn.execute(
""select sqlite_version()""
).fetchall()[0][0]
version = await db.execute_fn(get_version)",736,
266,"await db.execute_write(sql, params=None, block=True)","SQLite only allows one database connection to write at a time. Datasette handles this for you by maintaining a queue of writes to be executed against a given database. Plugins can submit write operations to this queue and they will be executed in the order in which they are received.
This method can be used to queue up a non-SELECT SQL query to be executed against a single write connection to the database.
You can pass additional SQL parameters as a tuple or dictionary.
The method will block until the operation is completed, and the return value will be the return from calling conn.execute(...) using the underlying sqlite3 Python library.
If you pass block=False this behavior changes to ""fire and forget"" - queries will be added to the write queue and executed in a separate thread while your code can continue to do other things. The method will return a UUID representing the queued task.
Each call to execute_write() will be executed inside a transaction.",736,
267,"await db.execute_write_script(sql, block=True)","Like execute_write() but can be used to send multiple SQL statements in a single string separated by semicolons, using the sqlite3 conn.executescript() method.
Each call to execute_write_script() will be executed inside a transaction.",736,
268,"await db.execute_write_many(sql, params_seq, block=True)","Like execute_write() but uses the sqlite3 conn.executemany() method. This will efficiently execute the same SQL statement against each of the parameters in the params_seq iterator, for example:
await db.execute_write_many(
""insert into characters (id, name) values (?, ?)"",
[(1, ""Melanie""), (2, ""Selma""), (2, ""Viktor"")],
)
Each call to execute_write_many() will be executed inside a transaction.",736,
269,"await db.execute_write_fn(fn, block=True, transaction=True)","This method works like .execute_write() , but instead of a SQL statement you give it a callable Python function. Your function will be queued up and then called when the write connection is available, passing that connection as the argument to the function.
The function can then perform multiple actions, safe in the knowledge that it has exclusive access to the single writable connection for as long as it is executing.
fn needs to be a regular function, not an async def function.
For example:
def delete_and_return_count(conn):
conn.execute(""delete from some_table where id > 5"")
return conn.execute(
""select count(*) from some_table""
).fetchone()[0]
try:
num_rows_left = await database.execute_write_fn(
delete_and_return_count
)
except Exception as e:
print(""An error occurred:"", e)
The value returned from await database.execute_write_fn(...) will be the return value from your function.
If your function raises an exception that exception will be propagated up to the await line.
By default your function will be executed inside a transaction. You can pass transaction=False to disable this behavior, though if you do that you should be careful to manually apply transactions - ideally using the with conn: pattern, or you may see OperationalError: database table is locked errors.
If you specify block=False the method becomes fire-and-forget, queueing your function to be executed and then allowing your code after the call to .execute_write_fn() to continue running while the underlying thread waits for an opportunity to run your function. A UUID representing the queued task will be returned. Any exceptions in your code will be silently swallowed.",736,
270,await db.execute_isolated_fn(fn),"This method works is similar to execute_write_fn() but executes the provided function in an entirely isolated SQLite connection, which is opened, used and then closed again in a single call to this method.
The prepare_connection() plugin hook is not executed against this connection.
This allows plugins to execute database operations that might conflict with how database connections are usually configured. For example, running a VACUUM operation while bypassing any restrictions placed by the datasette-sqlite-authorizer plugin.
Plugins can also use this method to load potentially dangerous SQLite extensions, use them to perform an operation and then have them safely unloaded at the end of the call, without risk of exposing them to other connections.
Functions run using execute_isolated_fn() share the same queue as execute_write_fn() , which guarantees that no writes can be executed at the same time as the isolated function is executing.
The return value of the function will be returned by this method. Any exceptions raised by the function will be raised out of the await line as well.",736,
271,db.close(),"Closes all of the open connections to file-backed databases. This is mainly intended to be used by large test suites, to avoid hitting limits on the number of open files.",736,
272,Database introspection,"The Database class also provides properties and methods for introspecting the database.
db.name - string
The name of the database - usually the filename without the .db prefix.
db.size - integer
The size of the database file in bytes. 0 for :memory: databases.
db.mtime_ns - integer or None
The last modification time of the database file in nanoseconds since the epoch. None for :memory: databases.
db.is_mutable - boolean
Is this database mutable, and allowed to accept writes?
db.is_memory - boolean
Is this database an in-memory database?
await db.attached_databases() - list of named tuples
Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields seq , name and file .
await db.table_exists(table) - boolean
Check if a table called table exists.
await db.view_exists(view) - boolean
Check if a view called view exists.
await db.table_names() - list of strings
List of names of tables in the database.
await db.view_names() - list of strings
List of names of views in the database.
await db.table_columns(table) - list of strings
Names of columns in a specific table.
await db.table_column_details(table) - list of named tuples
Full details of the columns in a specific table. Each column is represented by a Column named tuple with fields cid (integer representing the column position), name (string), type (string, e.g. REAL or VARCHAR(30) ), notnull (integer 1 or 0), default_value (string or None), is_pk (integer 1 or 0).
await db.primary_keys(table) - list of strings
Names of the columns that are part of the primary key for this table.
await db.fts_table(table) - string or None
The name of the FTS table associated with this table, if one exists.
await db.label_column_for_table(table) - string or None
The label column that is associated with this table - either automatically detected or using the ""label_column"" key from Metadata , see Specifying the label column for a table .
await db.foreign_keys_for_table(table) - list of dictionaries
Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this: {""column"": string, ""other_table"": string, ""other_column"": string} .
await db.hidden_table_names() - list of strings
List of tables which Datasette ""hides"" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the Hiding tables feature.
await db.get_table_definition(table) - string
Returns the SQL definition for the table - the CREATE TABLE statement and any associated CREATE INDEX statements.
await db.get_view_definition(view) - string
Returns the SQL definition of the named view.
await db.get_all_foreign_keys() - dictionary
Dictionary representing both incoming and outgoing foreign keys for this table. It has two keys, ""incoming"" and ""outgoing"" , each of which is a list of dictionaries with keys ""column"" , ""other_table"" and ""other_column"" . For example:
{
""incoming"": [],
""outgoing"": [
{
""other_table"": ""attraction_characteristic"",
""column"": ""characteristic_id"",
""other_column"": ""pk"",
},
{
""other_table"": ""roadside_attractions"",
""column"": ""attraction_id"",
""other_column"": ""pk"",
}
]
}",736,
273,CSRF protection,"Datasette uses asgi-csrf to guard against CSRF attacks on form POST submissions. Users receive a ds_csrftoken cookie which is compared against the csrftoken form field (or x-csrftoken HTTP header) for every incoming request.
If your plugin implements a