{"ok": true, "next": null, "rows": [{"id": "internals:bypassing-permission-checks", "page": "internals", "ref": "bypassing-permission-checks", "title": "Bypassing permission checks", "content": "All  datasette.client  methods accept an optional  skip_permission_checks=True  parameter. When set, all permission checks will be bypassed for that request, allowing access to any resource regardless of the configured permissions. \n                     This is useful for plugins and internal operations that need to access all resources without being subject to permission restrictions. \n                     Example usage: \n                     # Regular request - respects permissions\nresponse = await datasette.client.get(\n    \"/private-db/secret-table.json\"\n)\n# May return 403 Forbidden if access is denied\n\n# With skip_permission_checks - bypasses all permission checks\nresponse = await datasette.client.get(\n    \"/private-db/secret-table.json\",\n    skip_permission_checks=True,\n)\n# Will return 200 OK and the data, regardless of permissions \n                     This parameter works with all HTTP methods ( get ,  post ,  put ,  patch ,  delete ,  options ,  head ) and the generic  request  method. \n                     \n                         Use  skip_permission_checks=True  with caution. It completely bypasses Datasette's permission system and should only be used in trusted plugin code or internal operations where you need guaranteed access to resources.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"datasette.client\"]", "references": "[]"}, {"id": "internals:database-close", "page": "internals", "ref": "database-close", "title": "db.close()", "content": "Release all resources held by this  Database  instance. 
This shuts down the background write thread (if one was started by a previous call to  await db.execute_write_fn(fn, block=True, transaction=True)  or similar), closes the write connection, and closes any cached read connections. \n                 After  db.close()  has been called, any further call to  await db.execute(sql, ...) ,  await db.execute_fn(fn) ,  await db.execute_write(sql, params=None, block=True) ,  await db.execute_write_fn(fn, block=True, transaction=True) ,  await db.execute_write_many(sql, params_seq, block=True) ,  await db.execute_write_script(sql, block=True)  or  await db.execute_isolated_fn(fn)  will raise a  datasette.database.DatasetteClosedError  exception. \n                 close()  is idempotent \u2014 calling it a second time is a no-op. It is one-way: a closed  Database  cannot be reopened.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:database-constructor", "page": "internals", "ref": "database-constructor", "title": "Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None, is_temp_disk=False)", "content": "The  Database()  constructor can be used by plugins, in conjunction with  .add_database(db, name=None, route=None) , to create and register new databases. \n                 The arguments are as follows: \n                 \n                     \n                         ds  -  Datasette class  (required) \n                         \n                             The Datasette instance you are attaching this database to. \n                         \n                     \n                     \n                         path  - string \n                         \n                             Path to a SQLite database file on disk. 
\n                         \n                     \n                     \n                         is_mutable  - boolean \n                         \n                             Set this to  False  to cause Datasette to open the file in immutable mode. \n                         \n                     \n                     \n                         is_memory  - boolean \n                         \n                             Use this to create non-shared memory connections. \n                         \n                     \n                     \n                         memory_name  - string or  None \n                         \n                             Use this to create a named in-memory database. Unlike regular memory databases these can be accessed by multiple threads and will persist any changes made to them for the lifetime of the Datasette server process. \n                         \n                     \n                     \n                         is_temp_disk  - boolean \n                         \n                             Set this to  True  to create a temporary file-backed database. This creates a SQLite database in a temporary file on disk (using Python's  tempfile.mkstemp() ) with WAL mode enabled for better concurrent read/write performance. The temporary file is automatically cleaned up when the database is closed or when the process exits. \n                             Unlike named in-memory databases ( memory_name ), temporary disk databases support concurrent readers and writers without locking errors, because WAL mode allows readers and writers to operate simultaneously. This makes them suitable for use cases like the internal database where concurrent access is common. \n                             When  is_temp_disk=True , the  path ,  is_mutable , and  mode  parameters are set automatically and should not be provided. 
\n                         \n                     \n                 \n                 The first argument is the  datasette  instance you are attaching this database to, the second is  path= ; the remaining arguments, such as  is_mutable  and  is_memory , are optional.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:database-execute", "page": "internals", "ref": "database-execute", "title": "await db.execute(sql, ...)", "content": "Executes a SQL query against the database and returns the resulting rows (see  Results ). \n                 \n                     \n                         sql  - string (required) \n                         \n                             The SQL query to execute. This can include  ?  or  :named  parameters. \n                         \n                     \n                     \n                         params  - list or dict \n                         \n                             A list or dictionary of values to use for the parameters. List for  ? , dictionary for  :named . \n                         \n                     \n                     \n                         truncate  - boolean \n                         \n                             Should the rows returned by the query be truncated at the maximum page size? Defaults to  True ; set this to  False  to disable truncation. \n                         \n                     \n                     \n                         custom_time_limit  - integer ms \n                         \n                             A custom time limit for this query. This can be set to a lower value than the configured Datasette default. If a query takes longer than this it will be terminated early and raise a  datasette.database.QueryInterrupted  exception. 
\n                         \n                     \n                     \n                         page_size  - integer \n                         \n                             Set a custom page size for truncation, overriding the configured Datasette default. \n                         \n                     \n                     \n                         log_sql_errors  - boolean \n                         \n                             Should any SQL errors be logged to the console in addition to being raised as an error? Defaults to  True .", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:database-execute-fn", "page": "internals", "ref": "database-execute-fn", "title": "await db.execute_fn(fn)", "content": "Executes a given callback function against a read-only database connection running in a thread. The function will be passed a SQLite connection, and the return value from the function will be returned by the  await . \n                 Example usage: \n                 def get_version(conn):\n    return conn.execute(\n        \"select sqlite_version()\"\n    ).fetchall()[0][0]\n\n\nversion = await db.execute_fn(get_version)", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:database-execute-isolated-fn", "page": "internals", "ref": "database-execute-isolated-fn", "title": "await db.execute_isolated_fn(fn)", "content": "This method is similar to  execute_write_fn()  but executes the provided function in an entirely isolated SQLite connection, which is opened, used and then closed again in a single call to this method. \n                 The  prepare_connection()  plugin hook is not executed against this connection. \n                 This allows plugins to execute database operations that might conflict with how database connections are usually configured. 
For example, running a  VACUUM  operation while bypassing any restrictions placed by the  datasette-sqlite-authorizer  plugin. \n                 Running  VACUUM  using this method also ensures it won't trigger incorrect  RenameTableEvent  events, since  execute_isolated_fn()  does not trigger the Datasette mechanism that detects renamed tables in a way that can be confused by a  VACUUM . \n                 Plugins can also use this method to load potentially dangerous SQLite extensions, use them to perform an operation and then have them safely unloaded at the end of the call, without risk of exposing them to other connections. \n                 Functions run using  execute_isolated_fn()  share the same queue as  execute_write_fn() , which guarantees that no writes can be executed at the same time as the isolated function is executing. \n                 The return value of the function will be returned by this method. Any exceptions raised by the function will be raised out of the  await  line as well.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://github.com/datasette/datasette-sqlite-authorizer\", \"label\": \"datasette-sqlite-authorizer\"}]"}, {"id": "internals:database-execute-write", "page": "internals", "ref": "database-execute-write", "title": "await db.execute_write(sql, params=None, block=True)", "content": "SQLite only allows one database connection to write at a time. Datasette handles this for you by maintaining a queue of writes to be executed against a given database. Plugins can submit write operations to this queue and they will be executed in the order in which they are received. \n                 This method can be used to queue up a non-SELECT SQL query to be executed against a single write connection to the database. \n                 You can pass additional SQL parameters as a tuple or dictionary. 
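The two parameter styles are the ones Python's sqlite3 module understands, since the write queue ultimately calls the underlying sqlite3 connection - a minimal stdlib sketch of both forms (the docs table is illustrative, not part of Datasette):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table docs (id integer primary key, title text)")

# Positional parameters: pass a tuple matching the ? placeholders
conn.execute("insert into docs (id, title) values (?, ?)", (1, "About"))

# Named parameters: pass a dictionary matching the :name placeholders
conn.execute(
    "insert into docs (id, title) values (:id, :title)",
    {"id": 2, "title": "Contact"},
)

print(conn.execute("select count(*) from docs").fetchone()[0])  # -> 2
```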
\n                 The method will block until the operation is completed, and the return value will be the return from calling  conn.execute(...)  using the underlying  sqlite3  Python library. \n                 If you pass  block=False  this behavior changes to \"fire and forget\" - queries will be added to the write queue and executed in a separate thread while your code can continue to do other things. The method will return a UUID representing the queued task. \n                 Each call to  execute_write()  will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:database-execute-write-fn", "page": "internals", "ref": "database-execute-write-fn", "title": "await db.execute_write_fn(fn, block=True, transaction=True)", "content": "This method works like  .execute_write() , but instead of a SQL statement you give it a callable Python function. Your function will be queued up and then called when the write connection is available, passing that connection as the argument to the function. \n                 The function can then perform multiple actions, safe in the knowledge that it has exclusive access to the single writable connection for as long as it is executing. \n                 \n                     fn  needs to be a regular function, not an  async def  function. \n                 \n                 For example: \n                 def delete_and_return_count(conn):\n    conn.execute(\"delete from some_table where id > 5\")\n    return conn.execute(\n        \"select count(*) from some_table\"\n    ).fetchone()[0]\n\n\ntry:\n    num_rows_left = await database.execute_write_fn(\n        delete_and_return_count\n    )\nexcept Exception as e:\n    print(\"An error occurred:\", e) \n                 Your function can optionally accept a  track_event  parameter in addition to  conn .  
If it does, it will be passed a callable that can be used to queue events for dispatch after the write transaction commits successfully.  Events queued this way are discarded if the write raises an exception. \n                 from datasette.events import AlterTableEvent\n\n\ndef my_write(conn, track_event):\n    before_schema = conn.execute(\n        \"select sql from sqlite_master where name = 'my_table'\"\n    ).fetchone()[0]\n    conn.execute(\n        \"alter table my_table add column new_col text\"\n    )\n    after_schema = conn.execute(\n        \"select sql from sqlite_master where name = 'my_table'\"\n    ).fetchone()[0]\n    track_event(\n        AlterTableEvent(\n            actor=None,\n            database=\"mydb\",\n            table=\"my_table\",\n            before_schema=before_schema,\n            after_schema=after_schema,\n        )\n    )\n\n\nawait database.execute_write_fn(my_write) \n                 The value returned from  await database.execute_write_fn(...)  will be the return value from your function. \n                 If your function raises an exception that exception will be propagated up to the  await  line. \n                 By default your function will be executed inside a transaction. You can pass  transaction=False  to disable this behavior, though if you do that you should be careful to manually apply transactions - ideally using the  with conn:  pattern, or you may see  OperationalError: database table is locked  errors. \n                 If you specify  block=False  the method becomes fire-and-forget, queueing your function to be executed and then allowing your code after the call to  .execute_write_fn()  to continue running while the underlying thread waits for an opportunity to run your function. A UUID representing the queued task will be returned. 
Any exceptions in your code will be silently swallowed.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:database-execute-write-many", "page": "internals", "ref": "database-execute-write-many", "title": "await db.execute_write_many(sql, params_seq, block=True)", "content": "Like  execute_write()  but uses the  sqlite3   conn.executemany()  method. This will efficiently execute the same SQL statement against each of the parameters in the  params_seq  iterator, for example: \n                 await db.execute_write_many(\n    \"insert into characters (id, name) values (?, ?)\",\n    [(1, \"Melanie\"), (2, \"Selma\"), (3, \"Viktor\")],\n) \n                 Each call to  execute_write_many()  will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executemany\", \"label\": \"conn.executemany()\"}]"}, {"id": "internals:database-execute-write-script", "page": "internals", "ref": "database-execute-write-script", "title": "await db.execute_write_script(sql, block=True)", "content": "Like  execute_write()  but can be used to send multiple SQL statements in a single string separated by semicolons, using the  sqlite3   conn.executescript()  method. \n                 Each call to  execute_write_script()  will be executed inside a transaction.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#sqlite3.Cursor.executescript\", \"label\": \"conn.executescript()\"}]"}, {"id": "internals:database-hash", "page": "internals", "ref": "database-hash", "title": "db.hash", "content": "If the database was opened in immutable mode, this property returns the 64-character SHA-256 hash of the database contents as a string. 
Otherwise it returns  None .", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:database-results", "page": "internals", "ref": "database-results", "title": "Results", "content": "The  db.execute()  method returns a single  Results  object. This can be used to access the rows returned by the query. \n                 Iterating over a  Results  object will yield SQLite  Row objects . Each of these can be treated as a tuple or can be accessed using  row[\"column\"]  syntax: \n                 info = []\nresults = await db.execute(\"select name from sqlite_master\")\nfor row in results:\n    info.append(row[\"name\"]) \n                 The  Results  object also has the following properties and methods: \n                 \n                     \n                         .truncated  - boolean \n                         \n                             Indicates if this query was truncated - if it returned more results than the specified  page_size . If this is true then the results object will only provide access to the first  page_size  rows in the query result. You can disable truncation by passing  truncate=False  to the  db.execute()  method. \n                         \n                     \n                     \n                         .columns  - list of strings \n                         \n                             A list of column names returned by the query. \n                         \n                     \n                     \n                         .rows  - list of  sqlite3.Row \n                         \n                             This property provides direct access to the list of rows returned by the database. You can access specific rows by index using  results.rows[0] . 
\n                         \n                     \n                     \n                         .dicts()  - list of  dict \n                         \n                             This method returns a list of Python dictionaries, one for each row. \n                         \n                     \n                     \n                         .first()  - row or None \n                         \n                             Returns the first row in the results, or  None  if no rows were returned. \n                         \n                     \n                     \n                         .single_value() \n                         \n                             Returns the value of the first column of the first row of results - but only if the query returned a single row with a single column. Raises a  datasette.database.MultipleValues  exception otherwise. \n                         \n                     \n                     \n                         .__len__() \n                         \n                             Calling  len(results)  returns the (truncated) number of returned results.", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[{\"href\": \"https://docs.python.org/3/library/sqlite3.html#row-objects\", \"label\": \"Row objects\"}]"}, {"id": "internals:datasette-absolute-url", "page": "internals", "ref": "datasette-absolute-url", "title": ".absolute_url(request, path)", "content": "request  - Request \n                         \n                             The current Request object \n                         \n                     \n                     \n                         path  - string \n                         \n                             A path, for example  /dbname/table.json \n                         \n                     \n                 \n                 Returns the absolute URL for the given path, including the protocol and host. 
For example: \n                 absolute_url = datasette.absolute_url(\n    request, \"/dbname/table.json\"\n)\n# Would return \"http://localhost:8001/dbname/table.json\" \n                 The current request object is used to determine the hostname and protocol that should be used for the returned URL. The  force_https_urls  configuration setting is taken into account.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-actions", "page": "internals", "ref": "datasette-actions", "title": ".actions", "content": "Property exposing a dictionary of actions that have been registered using the  register_actions(datasette)  plugin hook. \n                 The dictionary keys are the action names - e.g.  view-instance  - and the values are  Action()  objects describing the permission.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-actors-from-ids", "page": "internals", "ref": "datasette-actors-from-ids", "title": "await .actors_from_ids(actor_ids)", "content": "actor_ids  - list of strings or integers \n                         \n                             A list of actor IDs to look up. \n                         \n                     \n                 \n                 Returns a dictionary, where the keys are the IDs passed to it and the values are the corresponding actor dictionaries. \n                 This method is mainly designed to be used with plugins. See the  actors_from_ids(datasette, actor_ids)  documentation for details. 
\n                 If no plugins that implement that hook are installed, the default return value looks like this: \n                 {\n    \"1\": {\"id\": \"1\"},\n    \"2\": {\"id\": \"2\"}\n}", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-add-database", "page": "internals", "ref": "datasette-add-database", "title": ".add_database(db, name=None, route=None)", "content": "db  - datasette.database.Database instance \n                         \n                             The database to be attached. \n                         \n                     \n                     \n                         name  - string, optional \n                         \n                             The name to be used for this database . If not specified Datasette will pick one based on the filename or memory name. \n                         \n                     \n                     \n                         route  - string, optional \n                         \n                             This will be used in the URL path. If not specified, it will default to the same thing as the  name . \n                         \n                     \n                 \n                 The  datasette.add_database(db)  method lets you add a new database to the current Datasette instance. \n                 The  db  parameter should be an instance of the  datasette.database.Database  class. For example: \n                 from datasette.database import Database\n\ndatasette.add_database(\n    Database(\n        datasette,\n        path=\"path/to/my-new-database.db\",\n    )\n) \n                 This will add a mutable database and serve it at  /my-new-database . \n                 Use  is_mutable=False  to add an immutable database. \n                 .add_database()  returns the Database instance, with its name set as the  database.name  attribute. 
Any time you are working with a newly added database you should use the return value of  .add_database() , for example: \n                 db = datasette.add_database(\n    Database(datasette, memory_name=\"statistics\")\n)\nawait db.execute_write(\n    \"CREATE TABLE foo(id integer primary key)\"\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-add-memory-database", "page": "internals", "ref": "datasette-add-memory-database", "title": ".add_memory_database(memory_name, name=None, route=None)", "content": "Adds a shared in-memory database with the specified name: \n                 datasette.add_memory_database(\"statistics\") \n                 This is a shortcut for the following: \n                 from datasette.database import Database\n\ndatasette.add_database(\n    Database(datasette, memory_name=\"statistics\")\n) \n                 Using either of these patterns will result in the in-memory database being served at  /statistics . 
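A named in-memory database of this kind is built on SQLite's shared-cache memory databases, which is what lets multiple connections see the same data. A minimal sqlite3 sketch of that underlying mechanism (the stats_demo name is illustrative, and this is a sketch of the technique, not Datasette's exact code):

```python
import sqlite3

# Two separate connections to the same named in-memory database,
# using SQLite's shared-cache URI syntax
uri = "file:stats_demo?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)

# A write made through one connection...
a.execute("create table counters (name text, value integer)")
a.execute("insert into counters values ('hits', 1)")
a.commit()

# ...is visible through the other; the database persists for as
# long as at least one connection to it remains open
print(b.execute("select value from counters").fetchone()[0])  # -> 1
```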
\n                 The  name  and  route  parameters are optional and work the same way as they do for  .add_database(db, name=None, route=None) .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-add-message", "page": "internals", "ref": "datasette-add-message", "title": ".add_message(request, message, type=datasette.INFO)", "content": "request  - Request \n                         \n                             The current Request object \n                         \n                     \n                     \n                         message  - string \n                         \n                             The message string \n                         \n                     \n                     \n                         type  - constant, optional \n                         \n                             The message type -  datasette.INFO ,  datasette.WARNING  or  datasette.ERROR \n                         \n                     \n                 \n                 Datasette's flash messaging mechanism allows you to add a message that will be displayed to the user on the next page that they visit. Messages are persisted in a  ds_messages  cookie. This method adds a message to that cookie. \n                 You can try out these messages (including the different visual styling of the three message types) using the  /-/messages  debugging tool.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-allowed", "page": "internals", "ref": "datasette-allowed", "title": "await .allowed(*, action, resource, actor=None)", "content": "action  - string \n                         \n                             The name of the action that is being permission checked. 
\n                         \n                     \n                     \n                         resource  - Resource object \n                         \n                             A Resource object representing the database, table, or other resource. Must be an instance of a Resource class such as  TableResource ,  DatabaseResource ,  QueryResource , or  InstanceResource . \n                         \n                     \n                     \n                         actor  - dictionary, optional \n                         \n                             The authenticated actor. This is usually  request.actor . Defaults to  None  for unauthenticated requests. \n                         \n                     \n                 \n                 This method checks if the given actor has permission to perform the given action on the given resource. All parameters must be passed as keyword arguments. \n                 Example usage: \n                 from datasette.resources import (\n    TableResource,\n    DatabaseResource,\n)\n\n# Check if actor can view a specific table\ncan_view = await datasette.allowed(\n    action=\"view-table\",\n    resource=TableResource(\n        database=\"fixtures\", table=\"facetable\"\n    ),\n    actor=request.actor,\n)\n\n# Check if actor can execute SQL on a database\ncan_execute = await datasette.allowed(\n    action=\"execute-sql\",\n    resource=DatabaseResource(database=\"fixtures\"),\n    actor=request.actor,\n) \n                 The method returns  True  if the permission is granted,  False  if denied.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-allowed-resources", "page": "internals", "ref": "datasette-allowed-resources", "title": "await .allowed_resources(action, actor=None, *, parent=None, include_is_private=False, include_reasons=False, limit=100, next=None)", "content": "Returns a  PaginatedResources  object containing resources that 
the actor can access for the specified action, with support for keyset pagination. \n                 \n                     \n                         action  - string \n                         \n                             The action name (e.g., \"view-table\", \"view-database\") \n                         \n                     \n                     \n                         actor  - dictionary, optional \n                         \n                             The authenticated actor. Defaults to  None  for unauthenticated requests. \n                         \n                     \n                     \n                         parent  - string, optional \n                         \n                             Optional parent filter (e.g., database name) to limit results \n                         \n                     \n                     \n                         include_is_private  - boolean, optional \n                         \n                             If True, adds a  .private  attribute to each Resource indicating whether anonymous users can access it \n                         \n                     \n                     \n                         include_reasons  - boolean, optional \n                         \n                             If True, adds a  .reasons  attribute with a list of strings describing why access was granted (useful for debugging) \n                         \n                     \n                     \n                         limit  - integer, optional \n                         \n                             Maximum number of results to return per page (1-1000, default 100) \n                         \n                     \n                     \n                         next  - string, optional \n                         \n                             Keyset token from a previous page for pagination \n                         \n                     \n                 \n                 The method returns a 
 PaginatedResources  object (from  datasette.utils ) with the following attributes: \n                 \n                     \n                         resources  - list \n                         \n                             List of  Resource  objects for the current page \n                         \n                     \n                     \n                         next  - string or None \n                         \n                             Token for the next page, or  None  if no more results exist \n                         \n                     \n                 \n                 Example usage: \n                 # Get first page of tables\npage = await datasette.allowed_resources(\n    \"view-table\",\n    actor=request.actor,\n    parent=\"fixtures\",\n    limit=50,\n)\n\nfor table in page.resources:\n    print(table.parent, table.child)\n    if hasattr(table, \"private\"):\n        print(f\"  Private: {table.private}\")\n\n# Get next page if available\nif page.next:\n    next_page = await datasette.allowed_resources(\n        \"view-table\", actor=request.actor, next=page.next\n    )\n\n# Iterate through all results automatically\npage = await datasette.allowed_resources(\n    \"view-table\", actor=request.actor\n)\nasync for table in page.all():\n    print(table.parent, table.child)\n\n# With reasons for debugging\npage = await datasette.allowed_resources(\n    \"view-table\", actor=request.actor, include_reasons=True\n)\nfor table in page.resources:\n    print(f\"{table.child}: {table.reasons}\") \n                 The  page.all()  async generator automatically handles pagination, fetching additional pages and yielding all resources one at a time. 
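The loop that page.all() performs is ordinary keyset pagination: fetch a page, yield its items, then repeat with the returned token until it is None. A generic asyncio sketch of that pattern, with a hypothetical fetch_page() standing in for allowed_resources():

```python
import asyncio

# Hypothetical paged data source: token -> (items, next_token)
PAGES = {
    None: ([1, 2], "t1"),
    "t1": ([3, 4], "t2"),
    "t2": ([5], None),
}


async def fetch_page(token):
    # Stand-in for: await datasette.allowed_resources(..., next=token)
    return PAGES[token]


async def all_items():
    # Keyset pagination loop: follow next tokens until exhausted
    token = None
    while True:
        items, token = await fetch_page(token)
        for item in items:
            yield item
        if token is None:
            break


async def main():
    return [item async for item in all_items()]


print(asyncio.run(main()))  # -> [1, 2, 3, 4, 5]
```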
\n                 This method uses  await .allowed_resources_sql(*, action, actor=None, parent=None, include_is_private=False)  under the hood and is an efficient way to list the databases, tables or other resources that an actor can access for a specific action.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-allowed-resources-sql", "page": "internals", "ref": "datasette-allowed-resources-sql", "title": "await .allowed_resources_sql(*, action, actor=None, parent=None, include_is_private=False)", "content": "Builds the SQL query that Datasette uses to determine which resources an actor may access for a specific action. Returns a  (sql: str, params: dict)  namedtuple that can be executed against the internal  catalog_*  database tables.  parent  can be used to limit results to a specific database, and  include_is_private  adds a column indicating whether anonymous users would be denied access to that resource. \n                 Plugins that need to execute custom analysis over the raw allow/deny rules can use this helper to run the same query that powers the  /-/allowed  debugging interface. 
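A sketch of how a returned `(sql, params)` pair can be executed. The schema and query text here are stand-ins invented for illustration (the real SQL and the real `catalog_*` schema come from Datasette itself); only the documented output columns `parent` and `child` are taken from the docs:

```python
import sqlite3
from collections import namedtuple

# Stand-in for the (sql, params) namedtuple the method returns
SqlAndParams = namedtuple("SqlAndParams", ["sql", "params"])

# Mock table standing in for Datasette's internal catalog_* tables
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE catalog_tables (database_name TEXT, table_name TEXT)"
)
conn.executemany(
    "INSERT INTO catalog_tables VALUES (?, ?)",
    [("fixtures", "cities"), ("fixtures", "rivers"), ("other", "logs")],
)

# An invented query shaped like the documented output (parent, child columns)
query = SqlAndParams(
    sql=(
        "SELECT database_name AS parent, table_name AS child "
        "FROM catalog_tables WHERE database_name = :parent ORDER BY child"
    ),
    params={"parent": "fixtures"},
)
rows = conn.execute(query.sql, query.params).fetchall()
print(rows)  # [('fixtures', 'cities'), ('fixtures', 'rivers')]
```

In a plugin the query would instead be run via `await db.execute(sql, params)` against the internal database.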
\n                 The SQL query built by this method will return the following columns: \n                 \n                     \n                         parent : The parent resource identifier (or NULL) \n                     \n                     \n                         child : The child resource identifier (or NULL) \n                     \n                     \n                         reason : The reason from the rule that granted access \n                     \n                     \n                         is_private : (if  include_is_private ) 1 if anonymous users cannot access, 0 otherwise", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-check-visibility", "page": "internals", "ref": "datasette-check-visibility", "title": "await .check_visibility(actor, action, resource=None)", "content": "actor  - dictionary \n                         \n                             The authenticated actor. This is usually  request.actor . \n                         \n                     \n                     \n                         action  - string \n                         \n                             The name of the action that is being permission checked. \n                         \n                     \n                     \n                         resource  - Resource object, optional \n                         \n                             The resource being checked, as a Resource object such as  DatabaseResource(database=...) ,  TableResource(database=..., table=...) , or  QueryResource(database=..., query=...) . Only some permissions apply to a resource. 
\n                         \n                     \n                 \n                 This convenience method can be used to answer the question \"should this item be considered private, in that it is visible to me but it is not visible to anonymous users?\" \n                 It returns a tuple of two booleans,  (visible, private) .  visible  indicates if the actor can see this resource.  private  will be  True  if an anonymous user would not be able to view the resource. \n                 This example checks if the user can access a specific table, and sets  private  so that a padlock icon can later be displayed: \n                 from datasette.resources import TableResource\n\nvisible, private = await datasette.check_visibility(\n    request.actor,\n    action=\"view-table\",\n    resource=TableResource(database=database, table=table),\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-close", "page": "internals", "ref": "datasette-close", "title": ".close()", "content": "Release all resources held by this  Datasette  instance. This calls  db.close()  on every attached database (including the internal database), shuts down the thread pool executor used to run SQL queries, and unlinks the temporary file used to back the internal database if one was created. \n                 close()  is synchronous, idempotent and one-way: after a call to  close()  any attempt to use the Datasette instance to execute SQL will raise a  datasette.database.DatasetteClosedError  exception. A closed  Datasette  cannot be reopened \u2014 callers that need a fresh instance should construct a new one. \n                 If a call to  Database.close()  on one of the attached databases raises an exception,  Datasette.close()  will continue trying to close the remaining databases and will re-raise the first exception after every database has been processed. 
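The "keep closing, re-raise the first exception afterwards" behaviour described above can be sketched in plain Python. `Resource` and `close_all` are illustrative stand-ins written for this sketch, not Datasette internals:

```python
# Sketch of the documented shutdown pattern: attempt to close every
# resource, remember the first failure, and re-raise it only after
# all resources have been processed.
class Resource:
    def __init__(self, name, fail=False):
        self.name, self.fail, self.closed = name, fail, False

    def close(self):
        if self.fail:
            raise RuntimeError(f"failed closing {self.name}")
        self.closed = True

def close_all(resources):
    first_error = None
    for r in resources:
        try:
            r.close()
        except Exception as exc:
            if first_error is None:
                first_error = exc
    if first_error is not None:
        raise first_error

dbs = [Resource("a"), Resource("b", fail=True), Resource("c")]
try:
    close_all(dbs)
except RuntimeError as exc:
    print(exc)  # failed closing b
print(dbs[2].closed)  # True - "c" was still closed despite the failure
```

This guarantees every database gets a chance to release its resources even when an earlier `close()` fails.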
\n                 When Datasette is being served over ASGI the  close()  method is wired up to the lifespan shutdown event, so resources are released cleanly on  SIGTERM  /  SIGINT .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-column-types", "page": "internals", "ref": "datasette-column-types", "title": "Column types", "content": "Column types are stored in the  column_types  table in the  internal database . The following methods provide the API for reading and modifying column type assignments.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-create-token", "page": "internals", "ref": "datasette-create-token", "title": "await .create_token(actor_id, expires_after=None, restrictions=None, handler=None)", "content": "actor_id  - string \n                         \n                             The ID of the actor to create a token for. \n                         \n                     \n                     \n                         expires_after  - int, optional \n                         \n                             The number of seconds after which the token should expire. \n                         \n                     \n                     \n                         restrictions  -  TokenRestrictions , optional \n                         \n                             A  TokenRestrictions  object limiting which actions the token can perform. \n                         \n                     \n                     \n                         handler  - string, optional \n                         \n                             The name of a specific token handler to use. If omitted, the first registered handler is used. See  register_token_handler(datasette) . 
\n                         \n                     \n                 \n                 This is an  async  method that returns an  API token  string which can be used to authenticate requests to the Datasette API. The default  SignedTokenHandler  returns tokens of the format  dstok_... . \n                 All tokens must have an  actor_id  string indicating the ID of the actor which the token will act on behalf of. \n                 Tokens default to lasting forever, but can be set to expire after a given number of seconds using the  expires_after  argument. The following code creates a token for  user1  that will expire after an hour: \n                 token = await datasette.create_token(\n    actor_id=\"user1\",\n    expires_after=3600,\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-databases", "page": "internals", "ref": "datasette-databases", "title": ".databases", "content": "Property exposing a  collections.OrderedDict  of databases currently connected to Datasette. \n                 The dictionary keys are the name of the database that is used in the URL - e.g.  /fixtures  would have a key of  \"fixtures\" . The values are  Database class  instances. \n                 All databases are listed, irrespective of user permissions.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-ensure-permission", "page": "internals", "ref": "datasette-ensure-permission", "title": "await .ensure_permission(action, resource=None, actor=None)", "content": "action  - string \n                         \n                             The action to check. See  Built-in actions  for a list of available actions. 
\n                         \n                     \n                     \n                         resource  - Resource object (optional) \n                         \n                             The resource to check the permission against. Must be an instance of  InstanceResource ,  DatabaseResource , or  TableResource  from the  datasette.resources  module. If omitted, defaults to  InstanceResource()  for instance-level permissions. \n                         \n                     \n                     \n                         actor  - dictionary (optional) \n                         \n                             The authenticated actor. This is usually  request.actor . \n                         \n                     \n                 \n                 This is a convenience wrapper around  await .allowed(*, action, resource, actor=None)  that raises a  datasette.Forbidden  exception if the permission check fails. Use this when you want to enforce a permission check and halt execution if the actor is not authorized. \n                 Example: \n                 from datasette.resources import TableResource\n\n# Will raise Forbidden if actor cannot view the table\nawait datasette.ensure_permission(\n    action=\"view-table\",\n    resource=TableResource(\n        database=\"fixtures\", table=\"cities\"\n    ),\n    actor=request.actor,\n)\n\n# For instance-level actions, resource can be omitted:\nawait datasette.ensure_permission(\n    action=\"permissions-debug\", actor=request.actor\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-get-column-metadata", "page": "internals", "ref": "datasette-get-column-metadata", "title": "await .get_column_metadata(self, database_name, resource_name, column_name)", "content": "database_name  - string \n                             \n                                 The name of the database to query. 
\n                             \n                         \n                         \n                             resource_name  - string \n                             \n                                 The name of the resource (table, view, or canned query) inside  database_name  to query. \n                             \n                         \n                         \n                             column_name  - string \n                             \n                                 The name of the column inside  resource_name  to query. \n                             \n                         \n                     \n                     Returns metadata keys and values for the specified column, resource, and database as a dictionary.\n                        Internally queries the  metadata_columns  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-get-column-type", "page": "internals", "ref": "datasette-get-column-type", "title": "await .get_column_type(database, resource, column)", "content": "database  - string \n                             \n                                 The name of the database. \n                             \n                         \n                         \n                             resource  - string \n                             \n                                 The name of the table or view. \n                             \n                         \n                         \n                             column  - string \n                             \n                                 The name of the column. \n                             \n                         \n                     \n                     Returns a  ColumnType  subclass instance with  .config  populated for the specified column, or  None  if no column type is assigned. 
\n                     ct = await datasette.get_column_type(\n    \"mydb\", \"mytable\", \"email_col\"\n)\nif ct:\n    print(ct.name)  # \"email\"\n    print(ct.config)  # None or {...}", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Column types\"]", "references": "[]"}, {"id": "internals:datasette-get-column-types", "page": "internals", "ref": "datasette-get-column-types", "title": "await .get_column_types(database, resource)", "content": "database  - string \n                             \n                                 The name of the database. \n                             \n                         \n                         \n                             resource  - string \n                             \n                                 The name of the table or view. \n                             \n                         \n                     \n                     Returns a dictionary mapping column names to  ColumnType  subclass instances (with  .config  populated) for all columns that have assigned types on the given resource. \n                     ct_map = await datasette.get_column_types(\"mydb\", \"mytable\")\nfor col_name, ct in ct_map.items():\n    print(col_name, ct.name, ct.config)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Column types\"]", "references": "[]"}, {"id": "internals:datasette-get-database", "page": "internals", "ref": "datasette-get-database", "title": ".get_database(name)", "content": "name  - string, optional \n                         \n                             The name of the database - optional. \n                         \n                     \n                 \n                 Returns the specified database object. Raises a  KeyError  if the database does not exist. 
Call this method without an argument to return the first connected database.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-get-database-metadata", "page": "internals", "ref": "datasette-get-database-metadata", "title": "await .get_database_metadata(self, database_name)", "content": "database_name  - string \n                             \n                                 The name of the database to query. \n                             \n                         \n                     \n                     Returns metadata keys and values for the specified database as a dictionary.\n                        Internally queries the  metadata_databases  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-get-instance-metadata", "page": "internals", "ref": "datasette-get-instance-metadata", "title": "await .get_instance_metadata(self)", "content": "Returns metadata keys and values for the entire Datasette instance as a dictionary.\n                        Internally queries the  metadata_instance  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-get-resource-metadata", "page": "internals", "ref": "datasette-get-resource-metadata", "title": "await .get_resource_metadata(self, database_name, resource_name)", "content": "database_name  - string \n                             \n                                 The name of the database to query. 
\n                             \n                         \n                         \n                             resource_name  - string \n                             \n                                 The name of the resource (table, view, or canned query) inside  database_name  to query. \n                             \n                         \n                     \n                     Returns metadata keys and values for the specified \"resource\" as a dictionary.\n                        A \"resource\" in this context can be a table, view, or canned query.\n                        Internally queries the  metadata_resources  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-get-set-metadata", "page": "internals", "ref": "datasette-get-set-metadata", "title": "Getting and setting metadata", "content": "Metadata about the instance, databases, tables and columns is stored in tables in  Datasette's internal database . The following methods are the supported API for plugins to read and update that stored metadata.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-plugin-config", "page": "internals", "ref": "datasette-plugin-config", "title": ".plugin_config(plugin_name, database=None, table=None)", "content": "plugin_name  - string \n                         \n                             The name of the plugin to look up configuration for. Usually this is something similar to  datasette-cluster-map . \n                         \n                     \n                     \n                         database  - None or string \n                         \n                             The database the user is interacting with. 
\n                         \n                     \n                     \n                         table  - None or string \n                         \n                             The table the user is interacting with. \n                         \n                     \n                 \n                 This method lets you read plugin configuration values that were set in   datasette.yaml . See  Writing plugins that accept configuration  for full details of how this method should be used. \n                 The return value will be the value from the configuration file - usually a dictionary. \n                 If the plugin is not configured the return value will be  None .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-remove-column-type", "page": "internals", "ref": "datasette-remove-column-type", "title": "await .remove_column_type(database, resource, column)", "content": "database  - string \n                             \n                                 The name of the database. \n                             \n                         \n                         \n                             resource  - string \n                             \n                                 The name of the table or view. \n                             \n                         \n                         \n                             column  - string \n                             \n                                 The name of the column. \n                             \n                         \n                     \n                     Removes the column type assignment for the specified column. 
\n                     await datasette.remove_column_type(\n    \"mydb\", \"mytable\", \"location\"\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Column types\"]", "references": "[]"}, {"id": "internals:datasette-remove-database", "page": "internals", "ref": "datasette-remove-database", "title": ".remove_database(name)", "content": "name  - string \n                         \n                             The name of the database to be removed. \n                         \n                     \n                 \n                 This removes a database that has been previously added.  name=  is the unique name of that database.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-render-template", "page": "internals", "ref": "datasette-render-template", "title": "await .render_template(template, context=None, request=None)", "content": "template  - string, list of strings or jinja2.Template \n                         \n                             The template file to be rendered, e.g.  my_plugin.html . Datasette will search for this file first in the  --template-dir=  location, if it was specified - then in the plugin's bundled templates and finally in Datasette's set of default templates. \n                             If this is a list of template file names then the first one that exists will be loaded and rendered. \n                             If this is a Jinja  Template object  it will be used directly. \n                         \n                     \n                     \n                         context  - None or a Python dictionary \n                         \n                             The context variables to pass to the template. 
\n                         \n                     \n                     \n                         request  - request object or None \n                         \n                             If you pass a Datasette request object here it will be made available to the template. \n                         \n                     \n                 \n                 Renders a  Jinja template  using Datasette's preconfigured instance of Jinja and returns the resulting string. The template will have access to Datasette's default template functions and any functions that have been made available by other plugins.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[{\"href\": \"https://jinja.palletsprojects.com/en/2.11.x/api/#jinja2.Template\", \"label\": \"Template object\"}, {\"href\": \"https://jinja.palletsprojects.com/en/2.11.x/\", \"label\": \"Jinja template\"}]"}, {"id": "internals:datasette-resolve-database", "page": "internals", "ref": "datasette-resolve-database", "title": ".resolve_database(request)", "content": "request  -  Request object \n                         \n                             A request object \n                         \n                     \n                 \n                 If you are implementing your own custom views, you may need to resolve the database that the user is requesting based on a URL path. If the regular expression for your route declares a  database  named group, you can use this method to resolve the database object. \n                 This returns a  Database  instance. 
\n                 If the database cannot be found, it raises a  datasette.utils.asgi.DatabaseNotFound  exception - which is a subclass of  datasette.utils.asgi.NotFound  with a  .database_name  attribute set to the name of the database that was requested.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-resolve-row", "page": "internals", "ref": "datasette-resolve-row", "title": ".resolve_row(request)", "content": "request  -  Request object \n                         \n                             A request object \n                         \n                     \n                 \n                 This method assumes your route declares named groups for  database ,  table  and  pks . \n                 It returns a  ResolvedRow  named tuple instance with the following fields: \n                 \n                     \n                         db  -  Database \n                         \n                             The database object \n                         \n                     \n                     \n                         table  - string \n                         \n                             The name of the table \n                         \n                     \n                     \n                         sql  - string \n                         \n                             SQL snippet that can be used in a  WHERE  clause to select the row \n                         \n                     \n                     \n                         params  - dict \n                         \n                             Parameters that should be passed to the SQL query \n                         \n                     \n                     \n                         pks  - list \n                         \n                             List of primary key column names \n                         \n                     \n                     \n                         pk_values  - 
list \n                         \n                             List of primary key values decoded from the URL \n                         \n                     \n                     \n                         row  -  sqlite3.Row \n                         \n                             The row itself \n                         \n                     \n                 \n                 If the database cannot be found it raises a  datasette.utils.asgi.DatabaseNotFound  exception. \n                 If the table does not exist it raises a  datasette.utils.asgi.TableNotFound  exception. \n                 If the row cannot be found it raises a  datasette.utils.asgi.RowNotFound  exception. This has  .database_name ,  .table  and  .pk_values  attributes, extracted from the request path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-resolve-table", "page": "internals", "ref": "datasette-resolve-table", "title": ".resolve_table(request)", "content": "request  -  Request object \n                         \n                             A request object \n                         \n                     \n                 \n                 This assumes that the regular expression for your route declares both a  database  and a  table  named group. 
\n                 It returns a  ResolvedTable  named tuple instance with the following fields: \n                 \n                     \n                         db  -  Database \n                         \n                             The database object \n                         \n                     \n                     \n                         table  - string \n                         \n                             The name of the table (or view) \n                         \n                     \n                     \n                         is_view  - boolean \n                         \n                             True  if this is a view,  False  if it is a table \n                         \n                     \n                 \n                 If the database cannot be found it raises a  datasette.utils.asgi.DatabaseNotFound  exception. \n                 If the table does not exist it raises a  datasette.utils.asgi.TableNotFound  exception - a subclass of  datasette.utils.asgi.NotFound  with  .database_name  and  .table  attributes.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-set-column-metadata", "page": "internals", "ref": "datasette-set-column-metadata", "title": "await .set_column_metadata(self, database_name, resource_name, column_name, key, value)", "content": "database_name  - string \n                             \n                                 The database the metadata entry belongs to. \n                             \n                         \n                         \n                             resource_name  - string \n                             \n                                 The resource (table, view, or canned query) the metadata entry belongs to. 
\n                             \n                         \n                         \n                             column_name  - string \n                             \n                                 The column the metadata entry belongs to. \n                             \n                         \n                         \n                             key  - string \n                             \n                                 The metadata entry key to insert (ex  title ,  description , etc.) \n                             \n                         \n                         \n                             value  - string \n                             \n                                 The value of the metadata entry to insert. \n                             \n                         \n                     \n                     Adds a new metadata entry for the specified column.\n                        Any previous column-level metadata entry with the same  key  will be overwritten.\n                        Internally upserts the value into the  metadata_columns  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-set-column-type", "page": "internals", "ref": "datasette-set-column-type", "title": "await .set_column_type(database, resource, column, column_type, config=None)", "content": "database  - string \n                             \n                                 The name of the database. \n                             \n                         \n                         \n                             resource  - string \n                             \n                                 The name of the table or view. 
\n                             \n                         \n                         \n                             column  - string \n                             \n                                 The name of the column. \n                             \n                         \n                         \n                             column_type  - string \n                             \n                                 The column type name to assign, e.g.  \"email\" . \n                             \n                         \n                         \n                             config  - dict, optional \n                             \n                                 Optional configuration dict for the column type. \n                             \n                         \n                     \n                     Assigns a column type to a column. Overwrites any existing assignment for that column.\n                        Raises  ValueError  if the column type declares  sqlite_types  and the target column does not match one of those SQLite types. \n                     await datasette.set_column_type(\n    \"mydb\",\n    \"mytable\",\n    \"location\",\n    \"point\",\n    config={\"srid\": 4326},\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Column types\"]", "references": "[]"}, {"id": "internals:datasette-set-database-metadata", "page": "internals", "ref": "datasette-set-database-metadata", "title": "await .set_database_metadata(self, database_name, key, value)", "content": "database_name  - string \n                             \n                                 The database the metadata entry belongs to. \n                             \n                         \n                         \n                             key  - string \n                             \n                                 The metadata entry key to insert (ex  title ,  description , etc.) 
\n                             \n                         \n                         \n                             value  - string \n                             \n                                 The value of the metadata entry to insert. \n                             \n                         \n                     \n                     Adds a new metadata entry for the specified database.\n                        Any previous database-level metadata entry with the same  key  will be overwritten.\n                        Internally upserts the value into the  metadata_databases  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-set-instance-metadata", "page": "internals", "ref": "datasette-set-instance-metadata", "title": "await .set_instance_metadata(self, key, value)", "content": "key  - string \n                             \n                                 The metadata entry key to insert (ex  title ,  description , etc.) \n                             \n                         \n                         \n                             value  - string \n                             \n                                 The value of the metadata entry to insert. 
\n                             \n                         \n                     \n                     Adds a new metadata entry for the entire Datasette instance.\n                        Any previous instance-level metadata entry with the same  key  will be overwritten.\n                        Internally upserts the value into the  metadata_instance  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-set-resource-metadata", "page": "internals", "ref": "datasette-set-resource-metadata", "title": "await .set_resource_metadata(self, database_name, resource_name, key, value)", "content": "database_name  - string \n                             \n                                 The database the metadata entry belongs to. \n                             \n                         \n                         \n                             resource_name  - string \n                             \n                                 The resource (table, view, or canned query) the metadata entry belongs to. \n                             \n                         \n                         \n                             key  - string \n                             \n                                 The metadata entry key to insert (ex  title ,  description , etc.) \n                             \n                         \n                         \n                             value  - string \n                             \n                                 The value of the metadata entry to insert. 
\n                             \n                         \n                     \n                     Adds a new metadata entry for the specified \"resource\".\n                        Any previous resource-level metadata entry with the same  key  will be overwritten.\n                        Internally upserts the value into the  the  metadata_resources  table inside the  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"Getting and setting metadata\"]", "references": "[]"}, {"id": "internals:datasette-setting", "page": "internals", "ref": "datasette-setting", "title": ".setting(key)", "content": "key  - string \n                         \n                             The name of the setting, e.g.  base_url . \n                         \n                     \n                 \n                 Returns the configured value for the specified  setting . This can be a string, boolean or integer depending on the requested setting. \n                 For example: \n                 downloads_are_allowed = datasette.setting(\"allow_download\")", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-sign", "page": "internals", "ref": "datasette-sign", "title": ".sign(value, namespace=\"default\")", "content": "value  - any serializable type \n                         \n                             The value to be signed. \n                         \n                     \n                     \n                         namespace  - string, optional \n                         \n                             An alternative namespace, see the  itsdangerous salt documentation . \n                         \n                     \n                 \n                 Utility method for signing values, such that you can safely pass data to and from an untrusted environment. This is a wrapper around the  itsdangerous  library. 
\n                 This method returns a signed string, which can be decoded and verified using  .unsign(value, namespace=\"default\") .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[{\"href\": \"https://itsdangerous.palletsprojects.com/en/1.1.x/serializer/#the-salt\", \"label\": \"itsdangerous salt documentation\"}, {\"href\": \"https://itsdangerous.palletsprojects.com/\", \"label\": \"itsdangerous\"}]"}, {"id": "internals:datasette-track-event", "page": "internals", "ref": "datasette-track-event", "title": "await .track_event(event)", "content": "event  -  Event \n                         \n                             An instance of a subclass of  datasette.events.Event . \n                         \n                     \n                 \n                 Plugins can call this to track events, using classes they have previously registered. See  Event tracking  for details. \n                 The event will then be passed to all plugins that have registered to receive events using the  track_event(datasette, event)  hook. \n                 Example usage, assuming the plugin has previously registered the  BanUserEvent  class: \n                 await datasette.track_event(\n    BanUserEvent(user={\"id\": 1, \"username\": \"cleverbot\"})\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-unsign", "page": "internals", "ref": "datasette-unsign", "title": ".unsign(value, namespace=\"default\")", "content": "signed  - any serializable type \n                         \n                             The signed string that was created using  .sign(value, namespace=\"default\") . \n                         \n                     \n                     \n                         namespace  - string, optional \n                         \n                             The alternative namespace, if one was used. 
\n                         \n                     \n                 \n                 Returns the original, decoded object that was passed to  .sign(value, namespace=\"default\") . If the signature is not valid this raises an  itsdangerous.BadSignature  exception.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:datasette-verify-token", "page": "internals", "ref": "datasette-verify-token", "title": "await .verify_token(token)", "content": "token  - string \n                         \n                             The token string to verify. \n                         \n                     \n                 \n                 This is an  async  method that verifies an API token by trying each registered token handler in order. Returns an actor dictionary from the first handler that recognizes the token, or  None  if no handler accepts it. \n                 actor = await datasette.verify_token(token)\nif actor:\n    # Token was valid\n    print(actor[\"id\"])", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:id1", "page": "internals", "ref": "id1", "title": "TokenRestrictions", "content": "The  TokenRestrictions  class uses a builder pattern to specify which actions a token is allowed to perform. 
Import it from  datasette.tokens : \n                     from datasette.tokens import TokenRestrictions\n\nrestrictions = (\n    TokenRestrictions()\n    .allow_all(\"view-instance\")\n    .allow_all(\"view-table\")\n    .allow_database(\"docs\", \"view-query\")\n    .allow_resource(\"docs\", \"attachments\", \"insert-row\")\n    .allow_resource(\"docs\", \"attachments\", \"update-row\")\n) \n                     The builder methods are: \n                     \n                         \n                             allow_all(action)  - allow an action across all databases and resources \n                         \n                         \n                             allow_database(database, action)  - allow an action on a specific database \n                         \n                         \n                             allow_resource(database, resource, action)  - allow an action on a specific resource (table, SQL view or  canned query ) within a database \n                         \n                     \n                     Each method returns the  TokenRestrictions  instance so calls can be chained. \n                     TokenRestrictions  also provides an  abbreviated(datasette)  method which returns the restrictions as a dictionary using the compact format described in  Restricting the actions that a token can perform , with action names replaced by their registered abbreviations. It returns the inner dictionary only - the  \"_r\"  wrapping key shown in that section is not included. Returns  None  if no restrictions are set. This is useful when writing a custom  register_token_handler(datasette)  that needs to embed restrictions in a token payload. 
\n                     For example, the following restrictions: \n                     restrictions = (\n    TokenRestrictions()\n    .allow_all(\"view-instance\")\n    .allow_database(\"docs\", \"view-query\")\n    .allow_resource(\"docs\", \"attachments\", \"insert-row\")\n)\nrestrictions.abbreviated(datasette) \n                     Returns this dictionary, using the abbreviations registered for each action: \n                     {\n    \"a\": [\"vi\"],\n    \"d\": {\"docs\": [\"vq\"]},\n    \"r\": {\"docs\": {\"attachments\": [\"ir\"]}},\n} \n                     The following example creates a token that can access  view-instance  and  view-table  across everything, can additionally use  view-query  for anything in the  docs  database and is allowed to execute  insert-row  and  update-row  in the  attachments  table in that database: \n                     token = await datasette.create_token(\n    actor_id=\"user1\",\n    restrictions=(\n        TokenRestrictions()\n        .allow_all(\"view-instance\")\n        .allow_all(\"view-table\")\n        .allow_database(\"docs\", \"view-query\")\n        .allow_resource(\"docs\", \"attachments\", \"insert-row\")\n        .allow_resource(\"docs\", \"attachments\", \"update-row\")\n    ),\n)", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"await .create_token(actor_id, expires_after=None, restrictions=None, handler=None)\"]", "references": "[]"}, {"id": "internals:id2", "page": "internals", "ref": "id2", "title": ".get_internal_database()", "content": "Returns a database object for reading and writing to the private  internal database .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:internals", "page": "internals", "ref": "internals", "title": "Internals for plugins", "content": "Many  Plugin hooks  are passed objects that provide access to internal Datasette functionality. 
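The chainable builder and the abbreviated dictionary format shown above can be sketched with a minimal stand-in class. The abbreviation table here is hard-coded for illustration; the real class looks abbreviations up from Datasette's registered actions:

```python
# Illustrative abbreviation table - real values come from registered actions
ABBREVIATIONS = {
    "view-instance": "vi",
    "view-query": "vq",
    "insert-row": "ir",
}


class SketchRestrictions:
    """Minimal stand-in for TokenRestrictions, for illustration only."""

    def __init__(self):
        self._all = []
        self._databases = {}
        self._resources = {}

    def allow_all(self, action):
        self._all.append(action)
        return self  # returning self makes the calls chainable

    def allow_database(self, database, action):
        self._databases.setdefault(database, []).append(action)
        return self

    def allow_resource(self, database, resource, action):
        self._resources.setdefault(database, {}).setdefault(
            resource, []
        ).append(action)
        return self

    def abbreviated(self):
        # Build the compact {"a": ..., "d": ..., "r": ...} dictionary
        return {
            "a": [ABBREVIATIONS[a] for a in self._all],
            "d": {
                db: [ABBREVIATIONS[a] for a in acts]
                for db, acts in self._databases.items()
            },
            "r": {
                db: {
                    res: [ABBREVIATIONS[a] for a in acts]
                    for res, acts in resources.items()
                }
                for db, resources in self._resources.items()
            },
        }


restrictions = (
    SketchRestrictions()
    .allow_all("view-instance")
    .allow_database("docs", "view-query")
    .allow_resource("docs", "attachments", "insert-row")
)
```

This produces the same compact dictionary shown in the abbreviated() example above.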
The interface to these objects should not be considered stable with the exception of methods that are documented here.", "breadcrumbs": "[]", "references": "[]"}, {"id": "internals:internals-csrf", "page": "internals", "ref": "internals-csrf", "title": "CSRF protection", "content": "Datasette protects against Cross-Site Request Forgery by inspecting the browser-set  Sec-Fetch-Site  and  Origin  headers on every unsafe (non- GET / HEAD / OPTIONS ) request, following the approach described in  Filippo Valsorda's article  and implemented in Go 1.25's  http.CrossOriginProtection . \n             A request is rejected with a  403  response if: \n             \n                 \n                     It carries  Sec-Fetch-Site  with any value other than  same-origin  or  none , or \n                 \n                 \n                     It has no  Sec-Fetch-Site  header but does carry an  Origin  header whose host does not match the request  Host . \n                 \n             \n             Requests from non-browser clients ( curl , server-to-server scripts, etc.) do not send  Sec-Fetch-Site  or  Origin  and pass through unchanged - CSRF is a browser-only attack. \n             No token, cookie, or hidden form field is needed. 
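The two rejection rules can be expressed as a pure function over the request headers. This is an illustrative sketch of the logic, not Datasette's actual implementation:

```python
from urllib.parse import urlsplit


def rejects_as_csrf(headers, request_host):
    """Return True if an unsafe (non-GET/HEAD/OPTIONS) request gets a 403."""
    sec_fetch_site = headers.get("sec-fetch-site")
    if sec_fetch_site is not None:
        # Rule 1: reject any value other than "same-origin" or "none"
        return sec_fetch_site not in ("same-origin", "none")
    origin = headers.get("origin")
    if origin is not None:
        # Rule 2: no Sec-Fetch-Site, but the Origin host does not
        # match the request Host
        return urlsplit(origin).netloc != request_host
    # Non-browser clients send neither header and pass through
    return False
```

Cross-site browser requests carry `Sec-Fetch-Site: cross-site` and are rejected by the first rule; older browsers without that header fall through to the Origin/Host comparison.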
Any  <form method=\"POST\">  inside Datasette or a plugin will be accepted from the same origin without modification.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://words.filippo.io/csrf/\", \"label\": \"Filippo Valsorda's article\"}]"}, {"id": "internals:internals-database", "page": "internals", "ref": "internals-database", "title": "Database class", "content": "Instances of the  Database  class can be used to execute queries against attached SQLite databases, and to run introspection against their schemas.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-database-introspection", "page": "internals", "ref": "internals-database-introspection", "title": "Database introspection", "content": "The  Database  class also provides properties and methods for introspecting the database. \n                 \n                     \n                         db.name  - string \n                         \n                             The name of the database - usually the filename without the  .db  prefix. \n                         \n                     \n                     \n                         db.size  - integer \n                         \n                             The size of the database file in bytes. 0 for  :memory:  databases. \n                         \n                     \n                     \n                         db.mtime_ns  - integer or None \n                         \n                             The last modification time of the database file in nanoseconds since the epoch.  None  for  :memory:  databases. \n                         \n                     \n                     \n                         db.is_mutable  - boolean \n                         \n                             Is this database mutable, and allowed to accept writes? 
\n                         \n                     \n                     \n                         db.is_memory  - boolean \n                         \n                             Is this database an in-memory database? \n                         \n                     \n                     \n                         db.is_temp_disk  - boolean \n                         \n                             Is this database a temporary file-backed database? See  Database(ds, path=None, is_mutable=True, is_memory=False, memory_name=None, is_temp_disk=False)  for details. Temporary disk databases report  hash  as  None  but have real values for  size  and  mtime_ns  since they are backed by a file on disk. \n                         \n                     \n                     \n                         await db.attached_databases()  - list of named tuples \n                         \n                             Returns a list of additional databases that have been connected to this database using the SQLite ATTACH command. Each named tuple has fields  seq ,  name  and  file . \n                         \n                     \n                     \n                         await db.table_exists(table)  - boolean \n                         \n                             Check if a table called  table  exists. \n                         \n                     \n                     \n                         await db.view_exists(view)  - boolean \n                         \n                             Check if a view called  view  exists. \n                         \n                     \n                     \n                         await db.table_names()  - list of strings \n                         \n                             List of names of tables in the database. 
\n                         \n                     \n                     \n                         await db.view_names()  - list of strings \n                         \n                             List of names of views in the database. \n                         \n                     \n                     \n                         await db.table_columns(table)  - list of strings \n                         \n                             Names of columns in a specific table. \n                         \n                     \n                     \n                         await db.table_column_details(table)  - list of named tuples \n                         \n                             Full details of the columns in a specific table. Each column is represented by a  Column  named tuple with fields  cid  (integer representing the column position),  name  (string),  type  (string, e.g.  REAL  or  VARCHAR(30) ),  notnull  (integer 1 or 0),  default_value  (string or None),  is_pk  (integer 1 or 0). \n                         \n                     \n                     \n                         await db.primary_keys(table)  - list of strings \n                         \n                             Names of the columns that are part of the primary key for this table. \n                         \n                     \n                     \n                         await db.fts_table(table)  - string or None \n                         \n                             The name of the FTS table associated with this table, if one exists. \n                         \n                     \n                     \n                         await db.label_column_for_table(table)  - string or None \n                         \n                             The label column that is associated with this table - either automatically detected or using the  \"label_column\"  key in configuration, see  label_column . 
\n                         \n                     \n                     \n                         await db.foreign_keys_for_table(table)  - list of dictionaries \n                         \n                             Details of columns in this table which are foreign keys to other tables. A list of dictionaries where each dictionary is shaped like this:  {\"column\": string, \"other_table\": string, \"other_column\": string} . \n                         \n                     \n                     \n                         await db.hidden_table_names()  - list of strings \n                         \n                             List of tables which Datasette \"hides\" by default - usually these are tables associated with SQLite's full-text search feature, the SpatiaLite extension or tables hidden using the  hidden  feature. \n                         \n                     \n                     \n                         await db.get_table_definition(table)  - string \n                         \n                             Returns the SQL definition for the table - the  CREATE TABLE  statement and any associated  CREATE INDEX  statements. \n                         \n                     \n                     \n                         await db.get_view_definition(view)  - string \n                         \n                             Returns the SQL definition of the named view. \n                         \n                     \n                     \n                         await db.get_all_foreign_keys()  - dictionary \n                         \n                             Dictionary representing both incoming and outgoing foreign keys for every table in this database. Each key is a table name that points to a dictionary with two keys,  \"incoming\"  and  \"outgoing\" , each of which is a list of dictionaries with keys  \"column\" ,  \"other_table\"  and  \"other_column\" . 
For example: \n                             {\n  \"documents\": {\n    \"incoming\": [\n      {\n        \"other_table\": \"pages\",\n        \"column\": \"id\",\n        \"other_column\": \"document_id\"\n      }\n    ],\n    \"outgoing\": []\n  },\n  \"pages\": {\n    \"incoming\": [\n      {\n        \"other_table\": \"organization_pages\",\n        \"column\": \"id\",\n        \"other_column\": \"page_id\"\n      }\n    ],\n    \"outgoing\": [\n      {\n        \"other_table\": \"documents\",\n        \"column\": \"document_id\",\n        \"other_column\": \"id\"\n      }\n    ]\n  },\n  \"organization\": {\n    \"incoming\": [\n      {\n        \"other_table\": \"organization_pages\",\n        \"column\": \"id\",\n        \"other_column\": \"organization_id\"\n      }\n    ],\n    \"outgoing\": []\n  },\n  \"organization_pages\": {\n    \"incoming\": [],\n    \"outgoing\": [\n      {\n        \"other_table\": \"pages\",\n        \"column\": \"page_id\",\n        \"other_column\": \"id\"\n      },\n      {\n        \"other_table\": \"organization\",\n        \"column\": \"organization_id\",\n        \"other_column\": \"id\"\n      }\n    ]\n  }\n}", "breadcrumbs": "[\"Internals for plugins\", \"Database class\"]", "references": "[]"}, {"id": "internals:internals-datasette", "page": "internals", "ref": "internals-datasette", "title": "Datasette class", "content": "This object is an instance of the  Datasette  class, passed to many plugin hooks as an argument called  datasette . 
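A plugin can walk the dictionary returned by  await db.get_all_foreign_keys()  like this. The helper below is an illustrative sketch, using a subset of the example data above:

```python
# Subset of the example get_all_foreign_keys() result shown above
all_foreign_keys = {
    "pages": {
        "incoming": [
            {
                "other_table": "organization_pages",
                "column": "id",
                "other_column": "page_id",
            }
        ],
        "outgoing": [
            {
                "other_table": "documents",
                "column": "document_id",
                "other_column": "id",
            }
        ],
    },
    "documents": {
        "incoming": [
            {"other_table": "pages", "column": "id", "other_column": "document_id"}
        ],
        "outgoing": [],
    },
}


def describe_outgoing(table):
    """List 'table.column -> other_table.other_column' for each outgoing key."""
    return [
        "{}.{} -> {}.{}".format(
            table, fk["column"], fk["other_table"], fk["other_column"]
        )
        for fk in all_foreign_keys[table]["outgoing"]
    ]
```

Each table's "incoming" list mirrors some other table's "outgoing" entry, so either direction can be used to build a join graph.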
\n             You can create your own instance of this - for example to help write tests for a plugin - like so: \n             from datasette.app import Datasette\n\n# With no arguments a single in-memory database will be attached\ndatasette = Datasette()\n\n# The files= argument can load files from disk\ndatasette = Datasette(files=[\"/path/to/my-database.db\"])\n\n# Pass metadata as a JSON dictionary like this\ndatasette = Datasette(\n    files=[\"/path/to/my-database.db\"],\n    metadata={\n        \"databases\": {\n            \"my-database\": {\n                \"description\": \"This is my database\"\n            }\n        }\n    },\n) \n             Constructor parameters include: \n             \n                 \n                     files=[...]  - a list of database files to open \n                 \n                 \n                     immutables=[...]  - a list of database files to open in immutable mode \n                 \n                 \n                     metadata={...}  - a dictionary of  Metadata \n                 \n                 \n                     config_dir=...  - the  configuration directory  to use, stored in  datasette.config_dir", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-datasette-client", "page": "internals", "ref": "internals-datasette-client", "title": "datasette.client", "content": "Plugins can make internal simulated HTTP requests to the Datasette instance within which they are running. This ensures that all of Datasette's external JSON APIs are also available to plugins, while avoiding the overhead of making an external HTTP call to access those APIs. \n                 The  datasette.client  object is a wrapper around the  HTTPX Python library , providing an async-friendly API that is similar to the widely used  Requests library . 
\n                 It offers the following methods: \n                 \n                     \n                         await datasette.client.get(path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal GET request against that path. \n                         \n                     \n                     \n                         await datasette.client.post(path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal POST request. Use  data={\"name\": \"value\"}  to pass form parameters. \n                         \n                     \n                     \n                         await datasette.client.options(path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal OPTIONS request. \n                         \n                     \n                     \n                         await datasette.client.head(path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal HEAD request. \n                         \n                     \n                     \n                         await datasette.client.put(path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal PUT request. \n                         \n                     \n                     \n                         await datasette.client.patch(path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal PATCH request. \n                         \n                     \n                     \n                         await datasette.client.delete(path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal DELETE request. 
\n                         \n                     \n                     \n                         await datasette.client.request(method, path, **kwargs)  - returns HTTPX Response \n                         \n                             Execute an internal request with the given HTTP method against that path. \n                         \n                     \n                 \n                 These methods can be used with  datasette.urls  - for example: \n                 table_json = (\n    await datasette.client.get(\n        datasette.urls.table(\n            \"fixtures\", \"facetable\", format=\"json\"\n        )\n    )\n).json() \n                 datasette.client  methods automatically take the current  base_url  setting into account, whether or not you use the  datasette.urls  family of methods to construct the path. \n                 For documentation on available  **kwargs  options and the shape of the HTTPX Response object refer to the  HTTPX Async documentation .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[{\"href\": \"https://www.python-httpx.org/\", \"label\": \"HTTPX Python library\"}, {\"href\": \"https://requests.readthedocs.io/\", \"label\": \"Requests library\"}, {\"href\": \"https://www.python-httpx.org/async/\", \"label\": \"HTTPX Async documentation\"}]"}, {"id": "internals:internals-datasette-client-actor", "page": "internals", "ref": "internals-datasette-client-actor", "title": "Authenticating as an actor", "content": "All  datasette.client  methods accept an optional  actor=  parameter. When set to a dictionary describing an actor, the request is made with a signed  ds_actor  cookie identifying that actor \u2014 as if the request had been made by a user who is signed in as that actor. \n                     This is a convenient shorthand equivalent to signing the cookie manually using  datasette.client.actor_cookie() . 
\n                     Example usage: \n                     response = await datasette.client.get(\n    \"/-/actor.json\", actor={\"id\": \"root\"}\n)\nassert response.json() == {\"actor\": {\"id\": \"root\"}} \n                     This parameter works with all HTTP methods ( get ,  post ,  put ,  patch ,  delete ,  options ,  head ) and the generic  request  method. \n                     Passing both  actor=  and a  ds_actor  cookie via  cookies=  raises a  TypeError . Other unrelated cookies can be combined with  actor= .", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"datasette.client\"]", "references": "[]"}, {"id": "internals:internals-datasette-is-client", "page": "internals", "ref": "internals-datasette-is-client", "title": "Detecting internal client requests", "content": "datasette.in_client()  - returns bool \n                             \n                                 Returns  True  if the current code is executing within a  datasette.client  request,  False  otherwise. \n                             \n                         \n                     \n                     This method is useful for plugins that need to behave differently when called through  datasette.client  versus when handling external HTTP requests. \n                     Example usage: \n                     async def fetch_documents(datasette):\n    if not datasette.in_client():\n        return Response.text(\n            \"Only available via internal client requests\",\n            status=403,\n        )\n    ... \n                     Note that  datasette.in_client()  is independent of  skip_permission_checks . 
A request made through  datasette.client  will always have  in_client()  return  True , regardless of whether  skip_permission_checks  is set.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\", \"datasette.client\"]", "references": "[]"}, {"id": "internals:internals-datasette-urls", "page": "internals", "ref": "internals-datasette-urls", "title": "datasette.urls", "content": "The  datasette.urls  object contains methods for building URLs to pages within Datasette. Plugins should use this to link to pages, since these methods take into account any  base_url  configuration setting that might be in effect. \n                 \n                     \n                         datasette.urls.instance(format=None) \n                         \n                             Returns the URL to the Datasette instance root page. This is usually  \"/\" . \n                         \n                     \n                     \n                         datasette.urls.path(path, format=None) \n                         \n                             Takes a path and returns the full path, taking  base_url  into account. 
\n                             For example,  datasette.urls.path(\"-/logout\")  will return the path to the logout page, which will be  \"/-/logout\"  by default or  /prefix-path/-/logout  if  base_url  is set to  /prefix-path/ \n                         \n                     \n                     \n                         datasette.urls.logout() \n                         \n                             Returns the URL to the logout page, usually  \"/-/logout\" \n                         \n                     \n                     \n                         datasette.urls.static(path) \n                         \n                             Returns the URL of one of Datasette's default static assets, for example  \"/-/static/app.css\" \n                         \n                     \n                     \n                         datasette.urls.static_plugins(plugin_name, path) \n                         \n                             Returns the URL of one of the static assets belonging to a plugin. 
\n                             datasette.urls.static_plugins(\"datasette_cluster_map\", \"datasette-cluster-map.js\")  would return  \"/-/static-plugins/datasette_cluster_map/datasette-cluster-map.js\" \n                         \n                     \n                     \n                         datasette.urls.static(path) \n                         \n                             Returns the URL of one of Datasette's default static assets, for example  \"/-/static/app.css\" \n                         \n                     \n                     \n                         datasette.urls.database(database_name, format=None) \n                         \n                             Returns the URL to a database page, for example  \"/fixtures\" \n                         \n                     \n                     \n                         datasette.urls.table(database_name, table_name, format=None) \n                         \n                             Returns the URL to a table page, for example  \"/fixtures/facetable\" \n                         \n                     \n                     \n                         datasette.urls.query(database_name, query_name, format=None) \n                         \n                             Returns the URL to a query page, for example  \"/fixtures/pragma_cache_size\" \n                         \n                     \n                 \n                 These functions can be accessed via the  {{ urls }}  object in Datasette templates, for example: \n                 <a href=\"{{ urls.instance() }}\">Homepage</a>\n<a href=\"{{ urls.database(\"fixtures\") }}\">Fixtures database</a>\n<a href=\"{{ urls.table(\"fixtures\", \"facetable\") }}\">facetable table</a>\n<a href=\"{{ urls.query(\"fixtures\", \"pragma_cache_size\") }}\">pragma_cache_size query</a> \n                 Use the  format=\"json\"  (or  \"csv\"  or other formats supported by plugins) arguments to get back URLs to the JSON representation. 
This is the path with  .json  added on the end. \n                 These methods each return a  datasette.utils.PrefixedUrlString  object, which is a subclass of the Python  str  type. This allows the logic that considers the  base_url  setting to detect if that prefix has already been applied to the path.", "breadcrumbs": "[\"Internals for plugins\", \"Datasette class\"]", "references": "[]"}, {"id": "internals:internals-formdata", "page": "internals", "ref": "internals-formdata", "title": "The FormData class", "content": "await request.form()  returns a  FormData  object - a dictionary-like object which provides access to form fields and uploaded files. It has a similar interface to  MultiParams . \n             \n                 \n                     form[key]  - string or UploadedFile \n                     \n                         Returns the first value for that key, or raises a  KeyError  if the key is missing. \n                     \n                 \n                 \n                     form.get(key)  - string, UploadedFile, or None \n                     \n                         Returns the first value for that key, or  None  if the key is missing. Pass a second argument to specify a different default. \n                     \n                 \n                 \n                     form.getlist(key)  - list \n                     \n                         Returns the list of values for that key. If the key is missing an empty list will be returned. \n                     \n                 \n                 \n                     form.keys()  - list of strings \n                     \n                         Returns the list of available keys. \n                     \n                 \n                 \n                     key in form  - True or False \n                     \n                         You can use  if key in form  to check if a key is present. 
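The  PrefixedUrlString  marker idea described above can be sketched in a few lines. This is an illustration only:  apply_base_url  is a hypothetical helper invented for this sketch, not part of Datasette's API.

```python
class PrefixedUrlString(str):
    # A str subclass used purely as a marker: its presence signals
    # that the base_url prefix has already been applied to the path.
    pass


def apply_base_url(path, base_url="/prefix-path/"):
    # Hypothetical helper for illustration only: a path that is
    # already prefixed is returned unchanged, so the prefix is
    # never applied twice.
    if isinstance(path, PrefixedUrlString):
        return path
    return PrefixedUrlString(base_url.rstrip("/") + "/" + path.lstrip("/"))
```

Because  PrefixedUrlString  subclasses  str , the returned values still behave as ordinary strings everywhere else in the code.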
\n                     \n                 \n                 \n                     for key in form  - iterator \n                     \n                         This lets you loop through every available key. \n                     \n                 \n                 \n                     len(form)  - integer \n                     \n                         Returns the total number of submitted values.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-internal", "page": "internals", "ref": "internals-internal", "title": "Datasette's internal database", "content": "Datasette maintains an \"internal\" SQLite database used for configuration, caching, and storage. Plugins can store configuration, settings, and other data inside this database. By default, Datasette will use a temporary in-memory SQLite database as the internal database, which is created at startup and destroyed at shutdown. Users of Datasette can optionally pass in a  --internal  flag to specify the path to a SQLite database to use as the internal database, which will persist internal data across Datasette instances. \n             Datasette maintains tables called  catalog_databases ,  catalog_tables ,  catalog_views ,  catalog_columns ,  catalog_indexes ,  catalog_foreign_keys  with details of the attached databases and their schemas. These tables should not be considered a stable API - they may change between Datasette releases. \n             Metadata is stored in tables  metadata_instance ,  metadata_databases ,  metadata_resources  and  metadata_columns . Plugins can interact with these tables via the  get_*_metadata() and set_*_metadata() methods . \n             The internal database is not exposed in the Datasette application by default, which means private data can safely be stored without worry of accidentally leaking information through the default Datasette interface and API. 
However, other plugins do have full read and write access to the internal database. \n             Plugins can access this database by calling  internal_db = datasette.get_internal_database()  and then executing queries using the  Database API . \n             Plugin authors are asked to practice good etiquette when using the internal database, as all plugins use the same database to store data. For example: \n             \n                 \n                     Use a unique prefix when creating tables, indices, and triggers in the internal database. If your plugin is called  datasette-xyz , then prefix names with  datasette_xyz_* . \n                 \n                 \n                     Avoid long-running write statements that may stall or block other plugins that are trying to write at the same time. \n                 \n                 \n                     Use temporary tables or shared in-memory attached databases when possible. \n                 \n                 \n                     Avoid implementing features that could expose private data stored in the internal database by other plugins.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-internal-schema", "page": "internals", "ref": "internals-internal-schema", "title": "Internal database schema", "content": "The internal database schema is as follows: \n                 [[[cog\nfrom metadata_doc import internal_schema\ninternal_schema(cog) \n                 ]]] \n                 CREATE TABLE catalog_databases (\n    database_name TEXT PRIMARY KEY,\n    path TEXT,\n    is_memory INTEGER,\n    schema_version INTEGER\n);\nCREATE TABLE catalog_tables (\n    database_name TEXT,\n    table_name TEXT,\n    rootpage INTEGER,\n    sql TEXT,\n    PRIMARY KEY (database_name, table_name),\n    FOREIGN KEY (database_name) REFERENCES catalog_databases(database_name)\n);\nCREATE TABLE catalog_views (\n    database_name TEXT,\n    view_name TEXT,\n    rootpage 
INTEGER,\n    sql TEXT,\n    PRIMARY KEY (database_name, view_name),\n    FOREIGN KEY (database_name) REFERENCES catalog_databases(database_name)\n);\nCREATE TABLE catalog_columns (\n    database_name TEXT,\n    table_name TEXT,\n    cid INTEGER,\n    name TEXT,\n    type TEXT,\n    \"notnull\" INTEGER,\n    default_value TEXT, -- renamed from dflt_value\n    is_pk INTEGER, -- renamed from pk\n    hidden INTEGER,\n    PRIMARY KEY (database_name, table_name, name),\n    FOREIGN KEY (database_name) REFERENCES catalog_databases(database_name),\n    FOREIGN KEY (database_name, table_name) REFERENCES catalog_tables(database_name, table_name)\n);\nCREATE TABLE catalog_indexes (\n    database_name TEXT,\n    table_name TEXT,\n    seq INTEGER,\n    name TEXT,\n    \"unique\" INTEGER,\n    origin TEXT,\n    partial INTEGER,\n    PRIMARY KEY (database_name, table_name, name),\n    FOREIGN KEY (database_name) REFERENCES catalog_databases(database_name),\n    FOREIGN KEY (database_name, table_name) REFERENCES catalog_tables(database_name, table_name)\n);\nCREATE TABLE catalog_foreign_keys (\n    database_name TEXT,\n    table_name TEXT,\n    id INTEGER,\n    seq INTEGER,\n    \"table\" TEXT,\n    \"from\" TEXT,\n    \"to\" TEXT,\n    on_update TEXT,\n    on_delete TEXT,\n    match TEXT,\n    PRIMARY KEY (database_name, table_name, id, seq),\n    FOREIGN KEY (database_name) REFERENCES catalog_databases(database_name),\n    FOREIGN KEY (database_name, table_name) REFERENCES catalog_tables(database_name, table_name)\n);\nCREATE TABLE metadata_instance (\n    key text,\n    value text,\n    unique(key)\n);\nCREATE TABLE metadata_databases (\n    database_name text,\n    key text,\n    value text,\n    unique(database_name, key)\n);\nCREATE TABLE metadata_resources (\n    database_name text,\n    resource_name text,\n    key text,\n    value text,\n    unique(database_name, resource_name, key)\n);\nCREATE TABLE metadata_columns (\n    database_name text,\n    resource_name text,\n  
  column_name text,\n    key text,\n    value text,\n    unique(database_name, resource_name, column_name, key)\n);\nCREATE TABLE column_types (\n    database_name TEXT NOT NULL,\n    resource_name TEXT NOT NULL,\n    column_name TEXT NOT NULL,\n    column_type TEXT NOT NULL,\n    config TEXT,\n    PRIMARY KEY (database_name, resource_name, column_name)\n); \n                 [[[end]]]", "breadcrumbs": "[\"Internals for plugins\", \"Datasette's internal database\"]", "references": "[]"}, {"id": "internals:internals-multiparams", "page": "internals", "ref": "internals-multiparams", "title": "The MultiParams class", "content": "request.args  is a  MultiParams  object - a dictionary-like object which provides access to query string parameters that may have multiple values. \n             Consider the query string  ?foo=1&foo=2&bar=3  - with two values for  foo  and one value for  bar . \n             \n                 \n                     request.args[key]  - string \n                     \n                         Returns the first value for that key, or raises a  KeyError  if the key is missing. For the above example  request.args[\"foo\"]  would return  \"1\" . \n                     \n                 \n                 \n                     request.args.get(key)  - string or None \n                     \n                         Returns the first value for that key, or  None  if the key is missing. Pass a second argument to specify a different default, e.g.  q = request.args.get(\"q\", \"\") . \n                     \n                 \n                 \n                     request.args.getlist(key)  - list of strings \n                     \n                         Returns the list of strings for that key.  request.args.getlist(\"foo\")  would return  [\"1\", \"2\"]  in the above example.  request.args.getlist(\"bar\")  would return  [\"3\"] . If the key is missing an empty list will be returned. 
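The  unique(...)  constraints on the metadata tables in the schema above make each key a single-valued slot, so an  INSERT OR REPLACE  acts as a setter. A minimal  sqlite3  sketch using just the documented  metadata_instance  DDL (real plugins should prefer the  get_*_metadata()  and  set_*_metadata()  methods rather than writing to these tables directly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# metadata_instance exactly as shown in the schema above: the
# unique(key) constraint lets INSERT OR REPLACE act as a setter.
conn.execute(
    "CREATE TABLE metadata_instance (key text, value text, unique(key))"
)
conn.execute(
    "INSERT OR REPLACE INTO metadata_instance VALUES ('title', 'First title')"
)
conn.execute(
    "INSERT OR REPLACE INTO metadata_instance VALUES ('title', 'Second title')"
)
# Only the most recent value for the key survives
rows = conn.execute("SELECT key, value FROM metadata_instance").fetchall()
```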
\n                     \n                 \n                 \n                     request.args.keys()  - list of strings \n                     \n                         Returns the list of available keys - for the example this would be  [\"foo\", \"bar\"] . \n                     \n                 \n                 \n                     key in request.args  - True or False \n                     \n                         You can use  if key in request.args  to check if a key is present. \n                     \n                 \n                 \n                     for key in request.args  - iterator \n                     \n                         This lets you loop through every available key. \n                     \n                 \n                 \n                     len(request.args)  - integer \n                     \n                         Returns the number of keys.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-permission-classes", "page": "internals", "ref": "internals-permission-classes", "title": "Permission classes and utilities", "content": "", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-permission-sql", "page": "internals", "ref": "internals-permission-sql", "title": "PermissionSQL class", "content": "The  PermissionSQL  class is used by plugins to contribute SQL-based permission rules through the  permission_resources_sql(datasette, actor, action)  hook. This enables efficient permission checking across multiple resources by leveraging SQLite's query engine. 
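The  MultiParams  behavior described earlier (first value for a key versus the full list of values) can be mimicked with the standard library. This is an illustrative stand-in, not Datasette's implementation:

```python
from urllib.parse import parse_qs


class MultiParamsSketch:
    # Illustrative stand-in for Datasette's MultiParams: a
    # dictionary-like wrapper over {key: [values]} data.
    def __init__(self, data):
        self._data = {key: list(values) for key, values in data.items()}

    def __getitem__(self, key):
        return self._data[key][0]  # raises KeyError if missing

    def get(self, key, default=None):
        values = self._data.get(key)
        return values[0] if values else default

    def getlist(self, key):
        return self._data.get(key, [])

    def keys(self):
        return list(self._data)

    def __contains__(self, key):
        return key in self._data

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)


# Same ?foo=1&foo=2&bar=3 example as in the MultiParams docs above
args = MultiParamsSketch(parse_qs("foo=1&foo=2&bar=3"))
```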
\n                 from dataclasses import dataclass\nfrom typing import Any, Dict\n\nfrom datasette.permissions import PermissionSQL\n\n\n@dataclass\nclass PermissionSQL:\n    source: str  # Plugin name for auditing\n    sql: str  # SQL query returning permission rules\n    params: Dict[str, Any]  # Parameters for the SQL query \n                 Attributes: \n                 \n                     \n                         source  - string \n                         \n                             An identifier for the source of these permission rules, typically the plugin name. This is used for debugging and auditing. \n                         \n                     \n                     \n                         sql  - string \n                         \n                             A SQL query that returns permission rules. The query must return rows with the following columns: \n                             \n                                 \n                                     parent  (TEXT or NULL) - The parent resource identifier (e.g., database name) \n                                 \n                                 \n                                     child  (TEXT or NULL) - The child resource identifier (e.g., table name) \n                                 \n                                 \n                                     allow  (INTEGER) - 1 for allow, 0 for deny \n                                 \n                                 \n                                     reason  (TEXT) - A human-readable explanation of why this permission was granted or denied \n                                 \n                             \n                         \n                     \n                     \n                         params  - dictionary \n                         \n                             A dictionary of parameters to bind into the SQL query. 
Parameter names should not include the  :  prefix.", "breadcrumbs": "[\"Internals for plugins\", \"Permission classes and utilities\"]", "references": "[]"}, {"id": "internals:internals-request", "page": "internals", "ref": "internals-request", "title": "Request object", "content": "The request object is passed to various plugin hooks. It represents an incoming HTTP request. It has the following properties: \n             \n                 \n                     .scope  - dictionary \n                     \n                         The ASGI scope that was used to construct this request, described in the  ASGI HTTP connection scope  specification. \n                     \n                 \n                 \n                     .method  - string \n                     \n                         The HTTP method for this request, usually  GET  or  POST . \n                     \n                 \n                 \n                     .url  - string \n                     \n                         The full URL for this request, e.g.  https://latest.datasette.io/fixtures . \n                     \n                 \n                 \n                     .scheme  - string \n                     \n                         The request scheme - usually  https  or  http . \n                     \n                 \n                 \n                     .headers  - dictionary (str -> str) \n                     \n                         A dictionary of incoming HTTP request headers. Header names have been converted to lowercase. \n                     \n                 \n                 \n                     .cookies  - dictionary (str -> str) \n                     \n                         A dictionary of incoming cookies \n                     \n                 \n                 \n                     .host  - string \n                     \n                         The host header from the incoming request, e.g.  latest.datasette.io  or  localhost . 
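As a sketch of the  PermissionSQL  contract above, a rule set can be exercised directly against SQLite to confirm it yields the required  parent / child / allow / reason  columns. The plugin name and table names below are made up, and the local dataclass is a stand-in mirroring the documented shape (a real plugin imports it from  datasette.permissions ):

```python
import sqlite3
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class PermissionSQL:
    # Local stand-in mirroring the class described above.
    source: str
    sql: str
    params: Dict[str, Any]


rule = PermissionSQL(
    source="datasette-example-plugin",  # hypothetical plugin name
    sql="""
        SELECT :database AS parent, NULL AS child,
               1 AS allow, 'default allow for this database' AS reason
        UNION ALL
        SELECT :database, :private_table, 0, 'this table is private'
    """,
    params={"database": "fixtures", "private_table": "secrets"},
)

# Running the query shows the rows the permission system would consume
rows = sqlite3.connect(":memory:").execute(rule.sql, rule.params).fetchall()
```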
\n                     \n                 \n                 \n                     .path  - string \n                     \n                         The path of the request excluding the query string, e.g.  /fixtures . \n                     \n                 \n                 \n                     .full_path  - string \n                     \n                         The path of the request including the query string if one is present, e.g.  /fixtures?sql=select+sqlite_version() . \n                     \n                 \n                 \n                     .query_string  - string \n                     \n                         The query string component of the request, without the  ?  - e.g.  name__contains=sam&age__gt=10 . \n                     \n                 \n                 \n                     .args  - MultiParams \n                     \n                         An object representing the parsed query string parameters, see below. \n                     \n                 \n                 \n                     .url_vars  - dictionary (str -> str) \n                     \n                         Variables extracted from the URL path, if that path was defined using a regular expression. See  register_routes(datasette) . \n                     \n                 \n                 \n                     .actor  - dictionary (str -> Any) or None \n                     \n                         The currently authenticated actor (see  actors ), or  None  if the request is unauthenticated. \n                     \n                 \n             \n             The object also has the following awaitable methods: \n             \n                 \n                     await request.form(files=False, ...)  - FormData \n                     \n                         Parses form data from the request body. Supports both  application/x-www-form-urlencoded  and  multipart/form-data  content types. 
\n                         Returns a  The FormData class  object with dict-like access to form fields and uploaded files. \n                         Requirements and errors: \n                         \n                             \n                                 A  Content-Type  header is required. Missing or unsupported content types raise  BadRequest . \n                             \n                             \n                                 For  multipart/form-data , the  boundary=...  parameter is required. \n                             \n                         \n                         Parameters: \n                         \n                             \n                                 files  (bool, default  False ): If  True , uploaded files are stored and accessible. If  False  (default), file content is discarded but form fields are still available. \n                             \n                             \n                                 max_file_size  (int, default 50MB): Maximum size per uploaded file in bytes. \n                             \n                             \n                                 max_request_size  (int, default 100MB): Maximum total request body size in bytes. \n                             \n                             \n                                 max_fields  (int, default 1000): Maximum number of form fields. \n                             \n                             \n                                 max_files  (int, default 100): Maximum number of uploaded files. \n                             \n                             \n                                 max_parts  (int, default  max_fields + max_files ): Maximum number of multipart parts in total. \n                             \n                             \n                                 max_field_size  (int, default 100KB): Maximum size of a text field value in bytes. 
\n                             \n                             \n                                 max_memory_file_size  (int, default 1MB): File size threshold before uploads spill to disk. \n                             \n                             \n                                 max_part_header_bytes  (int, default 16KB): Maximum total bytes allowed in part headers. \n                             \n                             \n                                 max_part_header_lines  (int, default 100): Maximum header lines per part. \n                             \n                             \n                                 min_free_disk_bytes  (int, default 50MB): Minimum free bytes required in the temp directory before accepting file uploads. \n                             \n                         \n                         Example usage: \n                         # Parse form fields only (files are discarded)\nform = await request.form()\nusername = form[\"username\"]\ntags = form.getlist(\"tags\")  # For multiple values\n\n# Parse form fields AND files\nform = await request.form(files=True)\nuploaded = form[\"avatar\"]\ncontent = await uploaded.read()\nprint(\n    uploaded.filename, uploaded.content_type, uploaded.size\n) \n                         Cleanup note: \n                         When using  files=True , call  await form.aclose()  once you are done with the uploads\n                            to ensure spooled temporary files are closed promptly. You can also use\n                             async with form: ...  for automatic cleanup. \n                         Don't forget to read about  CSRF protection ! \n                     \n                 \n                 \n                     await request.post_vars()  - dictionary \n                     \n                         Returns a dictionary of form variables that were submitted in the request body via  POST  using  application/x-www-form-urlencoded  encoding. 
For multipart forms or file uploads, use  request.form()  instead. \n                     \n                 \n                 \n                     await request.post_body()  - bytes \n                     \n                         Returns the un-parsed body of a request submitted by  POST  - useful for things like incoming JSON data. \n                     \n                 \n             \n             And a class method that can be used to create fake request objects for use in tests: \n             \n                 \n                     fake(path_with_query_string, method=\"GET\", scheme=\"http\", url_vars=None) \n                     \n                         Returns a  Request  instance for the specified path and method. For example: \n                         from datasette import Request\nfrom pprint import pprint\n\nrequest = Request.fake(\n    \"/fixtures/facetable/\",\n    url_vars={\"database\": \"fixtures\", \"table\": \"facetable\"},\n)\npprint(request.scope) \n                         This outputs: \n                         {'http_version': '1.1',\n 'method': 'GET',\n 'path': '/fixtures/facetable/',\n 'query_string': b'',\n 'raw_path': b'/fixtures/facetable/',\n 'scheme': 'http',\n 'type': 'http',\n 'url_route': {'kwargs': {'database': 'fixtures', 'table': 'facetable'}}}", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://asgi.readthedocs.io/en/latest/specs/www.html#connection-scope\", \"label\": \"ASGI HTTP connection scope\"}]"}, {"id": "internals:internals-response", "page": "internals", "ref": "internals-response", "title": "Response class", "content": "The  Response  class can be returned from view functions that have been registered using the  register_routes(datasette)  hook. \n             The  Response()  constructor takes the following arguments: \n             \n                 \n                     body  - string \n                     \n                         The body of the response. 
\n                     \n                 \n                 \n                     status  - integer (optional) \n                     \n                         The HTTP status - defaults to 200. \n                     \n                 \n                 \n                     headers  - dictionary (optional) \n                     \n                         A dictionary of extra HTTP headers, e.g.  {\"x-hello\": \"world\"} . \n                     \n                 \n                 \n                     content_type  - string (optional) \n                     \n                         The content-type for the response. Defaults to  text/plain . \n                     \n                 \n             \n             For example: \n             from datasette.utils.asgi import Response\n\nresponse = Response(\n    \"<xml>This is XML</xml>\",\n    content_type=\"application/xml; charset=utf-8\",\n) \n             The quickest way to create responses is using the  Response.text(...) ,  Response.html(...) ,  Response.json(...)  or  Response.redirect(...)  helper methods: \n             from datasette.utils.asgi import Response\n\nhtml_response = Response.html(\"This is HTML\")\njson_response = Response.json({\"this_is\": \"json\"})\ntext_response = Response.text(\n    \"This will become utf-8 encoded text\"\n)\n# Redirects are served as 302, unless you pass status=301:\nredirect_response = Response.redirect(\n    \"https://latest.datasette.io/\"\n) \n             Each of these responses will use the correct corresponding content-type -  text/html; charset=utf-8 ,  application/json; charset=utf-8  or  text/plain; charset=utf-8  respectively. 
\n             Each of these helper methods takes optional  status=  and  headers=  arguments, documented above.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-response-asgi-send", "page": "internals", "ref": "internals-response-asgi-send", "title": "Returning a response with .asgi_send(send)", "content": "In most cases you will return  Response  objects from your own view functions. You can also use a  Response  instance to respond at a lower level via ASGI, for example if you are writing code that uses the  asgi_wrapper(datasette)  hook. \n                 Create a  Response  object and then use  await response.asgi_send(send) , passing the ASGI  send  function. For example: \n                 async def require_authorization(scope, receive, send):\n    response = Response.text(\n        \"401 Authorization Required\",\n        headers={\n            \"www-authenticate\": 'Basic realm=\"Datasette\", charset=\"UTF-8\"'\n        },\n        status=401,\n    )\n    await response.asgi_send(send)", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"}, {"id": "internals:internals-response-set-cookie", "page": "internals", "ref": "internals-response-set-cookie", "title": "Setting cookies with response.set_cookie()", "content": "To set cookies on the response, use the  response.set_cookie(...)  method. The method signature looks like this: \n                 def set_cookie(\n    self,\n    key,\n    value=\"\",\n    max_age=None,\n    expires=None,\n    path=\"/\",\n    domain=None,\n    secure=False,\n    httponly=False,\n    samesite=\"lax\",\n): ... \n                 You can use this with  datasette.sign()  to set signed cookies. 
Here's how you would set the  ds_actor cookie  for use with Datasette  authentication : \n                 response = Response.redirect(\"/\")\nresponse.set_cookie(\n    \"ds_actor\",\n    datasette.sign({\"a\": {\"id\": \"cleopaws\"}}, \"actor\"),\n)\nreturn response", "breadcrumbs": "[\"Internals for plugins\", \"Response class\"]", "references": "[]"}, {"id": "internals:internals-shortcuts", "page": "internals", "ref": "internals-shortcuts", "title": "Import shortcuts", "content": "The following commonly used symbols can be imported directly from the  datasette  module: \n             from datasette import Response\nfrom datasette import Forbidden\nfrom datasette import NotFound\nfrom datasette import hookimpl\nfrom datasette import actor_matches_allow", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-tilde-encoding", "page": "internals", "ref": "internals-tilde-encoding", "title": "Tilde encoding", "content": "Datasette uses a custom encoding scheme in some places, called  tilde encoding . This is primarily used for table names and row primary keys, to avoid any confusion between  /  characters in those values and the Datasette URLs that reference them. \n                 Tilde encoding uses the same algorithm as  URL percent-encoding , but with the  ~  tilde character used in place of  % . \n                 Any character other than  ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz0123456789_-  will be replaced by the numeric equivalent preceded by a tilde. For example: \n                 \n                     \n                         /  becomes  ~2F \n                     \n                     \n                         .  
becomes  ~2E \n                     \n                     \n                         %  becomes  ~25 \n                     \n                     \n                         ~  becomes  ~7E \n                     \n                     \n                         Space becomes  + \n                     \n                     \n                         polls/2022.primary  becomes  polls~2F2022~2Eprimary \n                     \n                 \n                 Note that the space character is a special case: it will be replaced with a  +  symbol. \n                 \n                 \n                 \n                     datasette.utils.tilde_encode(s: str) -> str \n                     \n                         Returns the tilde-encoded string - for example  /foo/bar  ->  ~2Ffoo~2Fbar \n                     \n                 \n                 \n                 \n                 \n                     datasette.utils.tilde_decode(s: str) -> str \n                     \n                         Decodes a tilde-encoded string, so  ~2Ffoo~2Fbar  ->  /foo/bar", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[{\"href\": \"https://developer.mozilla.org/en-US/docs/Glossary/percent-encoding\", \"label\": \"URL percent-encoding\"}]"}, {"id": "internals:internals-tracer", "page": "internals", "ref": "internals-tracer", "title": "datasette.tracer", "content": "Running Datasette with  --setting trace_debug 1  enables trace debug output, which can then be viewed by adding  ?_trace=1  to the query string for any page. \n             You can see an example of this at the bottom of  latest.datasette.io/fixtures/facetable?_trace=1 . The JSON output shows full details of every SQL query that was executed to generate the page. \n             The  datasette-pretty-traces  plugin can be installed to provide a more readable display of this information. You can see  a demo of that here . 
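The tilde encoding rules described above are simple enough to reproduce with the standard library. This stand-alone sketch mirrors the documented behavior for illustration; real code should use  datasette.utils.tilde_encode  and  datasette.utils.tilde_decode :

```python
import string
from urllib.parse import unquote_plus

# Characters the docs list as passing through unchanged
SAFE = set(string.ascii_letters + string.digits + "_-")


def tilde_encode_sketch(s):
    # Same algorithm as URL percent-encoding, but emitting ~ instead of %
    out = []
    for ch in s:
        if ch in SAFE:
            out.append(ch)
        elif ch == " ":
            out.append("+")  # the documented special case for spaces
        else:
            out.extend("~%02X" % byte for byte in ch.encode("utf-8"))
    return "".join(out)


def tilde_decode_sketch(s):
    # Swap ~ back to % and reuse standard percent-decoding
    return unquote_plus(s.replace("~", "%"))
```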
\n             You can add your own custom traces to the JSON output using the  trace()  context manager. This takes a string that identifies the type of trace being recorded, and records any keyword arguments as additional JSON keys on the resulting trace object. \n             The start and end time, duration and a traceback of where the trace was executed will be automatically attached to the JSON object. \n             This example uses trace to record the start, end and duration of any HTTP GET requests made using the function: \n             from datasette.tracer import trace\nimport httpx\n\n\nasync def fetch_url(url):\n    with trace(\"fetch-url\", url=url):\n        async with httpx.AsyncClient() as client:\n            return await client.get(url)", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://latest.datasette.io/fixtures/facetable?_trace=1\", \"label\": \"latest.datasette.io/fixtures/facetable?_trace=1\"}, {\"href\": \"https://datasette.io/plugins/datasette-pretty-traces\", \"label\": \"datasette-pretty-traces\"}, {\"href\": \"https://latest-with-plugins.datasette.io/github/commits?_trace=1\", \"label\": \"a demo of that here\"}]"}, {"id": "internals:internals-tracer-trace-child-tasks", "page": "internals", "ref": "internals-tracer-trace-child-tasks", "title": "Tracing child tasks", "content": "If your code uses a mechanism such as  asyncio.gather()  to execute code in additional tasks you may find that some of the traces are missing from the display. \n                 You can use the  trace_child_tasks()  context manager to ensure these child tasks are correctly handled. \n                 from datasette import tracer\n\nwith tracer.trace_child_tasks():\n    results = await asyncio.gather(\n        # ... 
async tasks here\n    ) \n                 This example uses the  register_routes()  plugin hook to add a page at  /parallel-queries  which executes two SQL queries in parallel using  asyncio.gather()  and returns their results. \n                 import asyncio\n\nfrom datasette import hookimpl\nfrom datasette import tracer\nfrom datasette.utils.asgi import Response\n\n\n@hookimpl\ndef register_routes():\n    async def parallel_queries(datasette):\n        db = datasette.get_database()\n        with tracer.trace_child_tasks():\n            one, two = await asyncio.gather(\n                db.execute(\"select 1\"),\n                db.execute(\"select 2\"),\n            )\n        return Response.json(\n            {\n                \"one\": one.single_value(),\n                \"two\": two.single_value(),\n            }\n        )\n\n    return [\n        (r\"/parallel-queries$\", parallel_queries),\n    ] \n                 Note that running parallel SQL queries in this way has  been known to cause problems in the past , so treat this example with caution. \n                 Adding  ?_trace=1  will show that the trace covers both of those child tasks.", "breadcrumbs": "[\"Internals for plugins\", \"datasette.tracer\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/2189\", \"label\": \"been known to cause problems in the past\"}]"}, {"id": "internals:internals-uploadedfile", "page": "internals", "ref": "internals-uploadedfile", "title": "The UploadedFile class", "content": "When parsing multipart form data with  files=True , file uploads are returned as  UploadedFile  objects with the following properties and methods: \n             \n                 \n                     uploaded_file.name  - string \n                     \n                         The form field name. \n                     \n                 \n                 \n                     uploaded_file.filename  - string \n                     \n                         The original filename provided by the client. 
Note: This is sanitized to remove path components for security. \n                     \n                 \n                 \n                     uploaded_file.content_type  - string or None \n                     \n                         The MIME type of the uploaded file, if provided by the client. \n                     \n                 \n                 \n                     uploaded_file.size  - integer \n                     \n                         The size of the uploaded file in bytes. \n                     \n                 \n                 \n                     await uploaded_file.read(size=-1)  - bytes \n                     \n                         Read and return up to  size  bytes from the file. If  size  is -1 (default), read the entire file. \n                     \n                 \n                 \n                     await uploaded_file.seek(offset, whence=0)  - integer \n                     \n                         Seek to the given position in the file. Returns the new position. \n                     \n                 \n                 \n                     await uploaded_file.close() \n                     \n                         Close the underlying file. This is called automatically when the object is garbage collected. \n                     \n                 \n             \n             Files smaller than 1MB are stored in memory. Larger files are automatically spilled to temporary files on disk and cleaned up when the request completes. 
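This spill-to-disk behavior is the same pattern provided by Python's standard `tempfile.SpooledTemporaryFile`. The following sketch is illustrative only (not necessarily Datasette's exact implementation) and shows the threshold mechanism:

```python
from tempfile import SpooledTemporaryFile

# Illustrative sketch: data stays in memory until it exceeds max_size,
# then is transparently rolled over to a temporary file on disk.
ONE_MB = 1024 * 1024
buf = SpooledTemporaryFile(max_size=ONE_MB)

buf.write(b"small payload")          # well under 1MB: held in memory
in_memory_before = not buf._rolled   # _rolled is a CPython implementation detail

buf.write(b"x" * ONE_MB)             # pushes the total past 1MB: spills to disk
spilled = buf._rolled

buf.seek(0)
data = buf.read()                    # reads transparently from either backend
buf.close()                          # removes the backing temp file, if any
print(in_memory_before, spilled)     # True True
```

Reading and seeking work identically before and after the rollover, which is why callers of `UploadedFile` never need to care which backend is in use.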
\n             Example: \n             form = await request.form(files=True)\nuploaded = form[\"document\"]\n\n# Check file metadata\nprint(f\"Filename: {uploaded.filename}\")\nprint(f\"Content-Type: {uploaded.content_type}\")\nprint(f\"Size: {uploaded.size} bytes\")\n\n# Read file content\ncontent = await uploaded.read()\n\n# Or read in chunks\nawait uploaded.seek(0)\nwhile chunk := await uploaded.read(8192):\n    process_chunk(chunk)", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[]"}, {"id": "internals:internals-utils", "page": "internals", "ref": "internals-utils", "title": "The datasette.utils module", "content": "The  datasette.utils  module contains various utility functions used by Datasette. As a general rule you should consider anything in this module to be unstable - functions and classes here could change without warning or be removed entirely between Datasette releases, without being mentioned in the release notes. \n             The exception to this rule is anything that is documented here. If you find a need for an undocumented utility function in your own work, consider  opening an issue  requesting that the function you are using be upgraded to documented and supported status.", "breadcrumbs": "[\"Internals for plugins\"]", "references": "[{\"href\": \"https://github.com/simonw/datasette/issues/new\", \"label\": \"opening an issue\"}]"}, {"id": "internals:internals-utils-async-call-with-supported-arguments", "page": "internals", "ref": "internals-utils-async-call-with-supported-arguments", "title": "await async_call_with_supported_arguments(fn, ", "content": "Async version of  call_with_supported_arguments . Use this for  async def  callback functions. \n                 \n                 \n                     async datasette.utils.async_call_with_supported_arguments(fn, **kwargs) \n                     \n                         Async version of  call_with_supported_arguments() . \n                         Calls  await fn(...) 
 with the subset of  **kwargs  matching its\n                            signature. \n                         \n                             \n                                 Parameters \n                                 \n                                     \n                                         \n                                             fn  -- An async callable \n                                         \n                                         \n                                             kwargs  -- All available keyword arguments \n                                         \n                                     \n                                 \n                             \n                             \n                                 Returns \n                                 \n                                     The return value of  await fn(...)", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"}, {"id": "internals:internals-utils-await-me-maybe", "page": "internals", "ref": "internals-utils-await-me-maybe", "title": "await_me_maybe(value)", "content": "Utility function for calling  await  on a return value if it is awaitable, otherwise returning the value. This is used by Datasette to support plugin hooks that can optionally return awaitable functions. Read more about this function in  The \u201cawait me maybe\u201d pattern for Python asyncio . \n                 \n                 \n                     async datasette.utils.await_me_maybe(value: Any) -> Any \n                     \n                         If value is callable, call it. If awaitable, await it. 
Otherwise return it.", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[{\"href\": \"https://simonwillison.net/2020/Sep/2/await-me-maybe/\", \"label\": \"The \u201cawait me maybe\u201d pattern for Python asyncio\"}]"}, {"id": "internals:internals-utils-call-with-supported-arguments", "page": "internals", "ref": "internals-utils-call-with-supported-arguments", "title": "call_with_supported_arguments(fn, ", "content": "Call  fn , passing it only those keyword arguments that match its function signature. This implements a dependency injection pattern - the caller provides all available arguments, and the function receives only the ones it declares as parameters. \n                 This is useful in plugins that want to define callback functions that only declare the arguments they need. For example: \n                 from datasette.utils import call_with_supported_arguments\n\n\ndef my_callback(request, datasette): ...\n\n\n# This will pass only request and datasette, ignoring other kwargs:\ncall_with_supported_arguments(\n    my_callback,\n    request=request,\n    datasette=datasette,\n    database=database,\n    table=table,\n) \n                 \n                 \n                     datasette.utils.call_with_supported_arguments(fn, **kwargs) \n                     \n                         Call  fn  with the subset of  **kwargs  matching its signature. \n                         This implements dependency injection: the caller provides all available\n                            keyword arguments and the function receives only the ones it declares\n                            as parameters. 
\n                         \n                             \n                                 Parameters \n                                 \n                                     \n                                         \n                                             fn  -- A callable (sync function) \n                                         \n                                         \n                                             kwargs  -- All available keyword arguments \n                                         \n                                     \n                                 \n                             \n                             \n                                 Returns \n                                 \n                                     The return value of  fn", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"}, {"id": "internals:internals-utils-named-parameters", "page": "internals", "ref": "internals-utils-named-parameters", "title": "named_parameters(sql)", "content": "Derive the list of  :named  parameters referenced in a SQL query. \n                 \n                 \n                     datasette.utils.named_parameters(sql: str) -> List[str] \n                     \n                         Given a SQL statement, return a list of named parameters that are used in the statement \n                         e.g. for  select * from foo where id=:id  this would return  [\"id\"]", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"}, {"id": "internals:internals-utils-parse-metadata", "page": "internals", "ref": "internals-utils-parse-metadata", "title": "parse_metadata(content)", "content": "This function accepts a string containing either JSON or YAML, expected to be of the format described in  Metadata . It returns a nested Python dictionary representing the parsed data from that string. 
\n                 If the metadata cannot be parsed as either JSON or YAML the function will raise a  utils.BadMetadataError  exception. \n                 \n                 \n                     datasette.utils.parse_metadata(content: str) -> dict \n                     \n                         Detects if content is JSON or YAML and parses it appropriately.", "breadcrumbs": "[\"Internals for plugins\", \"The datasette.utils module\"]", "references": "[]"}, {"id": "internals:permission-sql-parameters", "page": "internals", "ref": "permission-sql-parameters", "title": "Available SQL parameters", "content": "When writing SQL for  PermissionSQL , the following parameters are automatically available: \n                     \n                         \n                             :actor  - JSON string or NULL \n                             \n                                 The full actor dictionary serialized as JSON. Use SQLite's  json_extract()  function to access fields: \n                                 json_extract(:actor, '$.role') = 'admin'\njson_extract(:actor, '$.team') = 'engineering' \n                             \n                         \n                         \n                             :actor_id  - string or NULL \n                             \n                                 The actor's  id  field, for simple equality comparisons: \n                                 :actor_id = 'alice' \n                             \n                         \n                         \n                             :action  - string \n                             \n                                 The action being checked (e.g.,  \"view-table\" ,  \"insert-row\" ,  \"execute-sql\" ). 
\n                             \n                         \n                     \n                     Example usage: \n                     Here's an example plugin that grants view-table permissions to users with an \"analyst\" role for tables in the \"analytics\" database: \n                     from datasette import hookimpl\nfrom datasette.permissions import PermissionSQL\n\n\n@hookimpl\ndef permission_resources_sql(datasette, actor, action):\n    if action != \"view-table\":\n        return None\n\n    return PermissionSQL(\n        source=\"my_analytics_plugin\",\n        sql=\"\"\"\n            SELECT 'analytics' AS parent,\n                   NULL AS child,\n                   1 AS allow,\n                   'Analysts can view analytics database' AS reason\n            WHERE json_extract(:actor, '$.role') = 'analyst'\n              AND :action = 'view-table'\n        \"\"\",\n        params={},\n    ) \n                     A more complex example that uses custom parameters: \n                     @hookimpl\ndef permission_resources_sql(datasette, actor, action):\n    if not actor:\n        return None\n\n    user_teams = actor.get(\"teams\", [])\n\n    return PermissionSQL(\n        source=\"team_permissions_plugin\",\n        sql=\"\"\"\n            SELECT\n                team_database AS parent,\n                team_table AS child,\n                1 AS allow,\n                'User is member of team: ' || team_name AS reason\n            FROM team_permissions\n            WHERE user_id = :user_id\n              AND :action IN ('view-table', 'insert-row', 'update-row')\n        \"\"\",\n        params={\"user_id\": actor.get(\"id\")},\n    ) \n                     Permission resolution rules: \n                     When multiple  PermissionSQL  objects return conflicting rules for the same resource, Datasette applies the following precedence: \n                     \n                         \n                             Specificity : Child-level 
rules (with both  parent  and  child ) override parent-level rules (with only  parent ), which override root-level rules (with neither  parent  nor  child ) \n                         \n                         \n                             Deny over allow : At the same specificity level, deny ( allow=0 ) takes precedence over allow ( allow=1 ) \n                         \n                         \n                             Implicit deny : If no rules match a resource, access is denied by default", "breadcrumbs": "[\"Internals for plugins\", \"Permission classes and utilities\", \"PermissionSQL class\"]", "references": "[]"}], "truncated": false}
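These precedence rules can be modeled in a few lines of plain Python. This is an illustrative sketch (not Datasette's actual resolver), using hypothetical rule dictionaries with `parent`, `child` and `allow` keys where `None` acts as a wildcard:

```python
def specificity(rule):
    # Child-level (parent + child) = 2, parent-level = 1, root-level = 0
    return int(rule["parent"] is not None) + int(rule["child"] is not None)


def allowed(rules, parent, child):
    # Keep only rules that apply to this resource (None matches anything)
    applicable = [
        r
        for r in rules
        if r["parent"] in (None, parent) and r["child"] in (None, child)
    ]
    if not applicable:
        return False  # Implicit deny: no matching rules at all
    # Only the most specific matching rules participate in the decision
    best = max(specificity(r) for r in applicable)
    finalists = [r for r in applicable if specificity(r) == best]
    # Deny over allow: any allow=0 finalist at this level denies access
    return all(r["allow"] for r in finalists)


rules = [
    {"parent": None, "child": None, "allow": 1},              # root-level allow
    {"parent": "analytics", "child": None, "allow": 0},       # parent-level deny
    {"parent": "analytics", "child": "reports", "allow": 1},  # child-level allow
]
print(allowed(rules, "analytics", "reports"))  # True  - child allow overrides parent deny
print(allowed(rules, "analytics", "sales"))    # False - parent deny overrides root allow
print(allowed(rules, "other_db", "t"))         # True  - only the root allow matches
```

The key design point is that specificity is resolved before the deny-over-allow comparison: a child-level allow can override a parent-level deny, but two conflicting rules at the same level always resolve to deny.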