{"id": "contributing:contributing-debugging", "page": "contributing", "ref": "contributing-debugging", "title": "Debugging", "content": "Any errors that occur while Datasette is running while display a stack trace on the console. \n You can tell Datasette to open an interactive pdb debugger session if an error occurs using the --pdb option: \n datasette --pdb fixtures.db", "breadcrumbs": "[\"Contributing\"]", "references": "[]"} {"id": "contributing:contributing-formatting-black", "page": "contributing", "ref": "contributing-formatting-black", "title": "Running Black", "content": "Black will be installed when you run pip install -e '.[test]' . To test that your code complies with Black, run the following in your root datasette repository checkout: \n black . --check \n All done! \u2728 \ud83c\udf70 \u2728\n95 files would be left unchanged. \n If any of your code does not conform to Black you can run this to automatically fix those problems: \n black . \n reformatted ../datasette/setup.py\nAll done! \u2728 \ud83c\udf70 \u2728\n1 file reformatted, 94 files left unchanged.", "breadcrumbs": "[\"Contributing\", \"Code formatting\"]", "references": "[]"} {"id": "contributing:contributing-using-fixtures", "page": "contributing", "ref": "contributing-using-fixtures", "title": "Using fixtures", "content": "To run Datasette itself, type datasette . \n You're going to need at least one SQLite database. A quick way to get started is to use the fixtures database that Datasette uses for its own tests. \n You can create a copy of that database by running this command: \n python tests/fixtures.py fixtures.db \n Now you can run Datasette against the new fixtures database like so: \n datasette fixtures.db \n This will start a server at http://127.0.0.1:8001/ . \n Any changes you make in the datasette/templates or datasette/static folder will be picked up immediately (though you may need to do a force-refresh in your browser to see changes to CSS or JavaScript). \n If you want to change Datasette's Python code you can use the --reload option to cause Datasette to automatically reload any time the underlying code changes: \n datasette --reload fixtures.db \n You can also use the fixtures.py script to recreate the testing version of metadata.json used by the unit tests. To do that: \n python tests/fixtures.py fixtures.db fixtures-metadata.json \n Or to output the plugins used by the tests, run this: \n python tests/fixtures.py fixtures.db fixtures-metadata.json fixtures-plugins\nTest tables written to fixtures.db\n- metadata written to fixtures-metadata.json\nWrote plugin: fixtures-plugins/register_output_renderer.py\nWrote plugin: fixtures-plugins/view_name.py\nWrote plugin: fixtures-plugins/my_plugin.py\nWrote plugin: fixtures-plugins/messages_output_renderer.py\nWrote plugin: fixtures-plugins/my_plugin_2.py \n Then run Datasette like this: \n datasette fixtures.db -m fixtures-metadata.json --plugins-dir=fixtures-plugins/", "breadcrumbs": "[\"Contributing\"]", "references": "[]"} {"id": "contributing:general-guidelines", "page": "contributing", "ref": "general-guidelines", "title": "General guidelines", "content": "main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. \n \n \n The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue. 
{"id": "contributing:general-guidelines", "page": "contributing", "ref": "general-guidelines", "title": "General guidelines", "content": "main should always be releasable . Incomplete features should live in branches. This ensures that any small bug fixes can be quickly released. \n \n \n The ideal commit should bundle together the implementation, unit tests and associated documentation updates. The commit message should link to an associated issue (see the sketch below). \n \n \n New plugin hooks should only be shipped if accompanied by a separate release of a non-demo plugin that uses them.", "breadcrumbs": "[\"Contributing\"]", "references": "[]"}
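{"id": "contributing:general-guidelines-commit-sketch", "page": "contributing", "ref": "general-guidelines-commit-sketch", "title": "Example: an ideal commit", "content": "A hypothetical sketch of the ideal commit described above; the file paths and issue number are illustrative rather than taken from a real change: \n git add datasette/views/table.py tests/test_table_view.py docs/pages.rst\ngit commit -m \"Add feature X to the table view, closes #1234\" \n Bundling the implementation, its unit tests and the documentation update into a single commit whose message references the issue keeps main releasable and makes the change easy to review or revert.", "breadcrumbs": "[\"Contributing\"]", "references": "[]"}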