Audit and compliance solution guide: Produce the Liquibase tables and reports to document all changes

Once you have enabled all of the Liquibase tools that allow you to document all changes for an audit, you are ready to produce the tables and reports for your auditor.

Before you begin

Ensure you have completed the Audit and compliance solution guide: document all changes workflow so that all tables and reports are enabled and each can be produced successfully.

Procedure

1

Create the DBCLH table

1. Produce the DBCLH table

  • To create the DBCLH table and print it to STDOUT, run the dbcl-history command. To generate human-readable JSON output in the CLI, run this command:

liquibase --url="jdbc:postgresql://localhost:5432/mydb" --username=<postgres> --password=<yourpassword> --dbclhistory-enabled=true --verbose dbcl-history --format=JSON_PRETTY

  • You can use the --output-file Global parameter to send the output to a file:

liquibase --url="jdbc:postgresql://localhost:5432/mydb" --username=<postgres> --password=<yourpassword> --dbclhistory-enabled=true --verbose --output-file="dbcl-history-output.json" dbcl-history --format=JSON_PRETTY

2. Capture extensions or SQL

By default, whenever the DBCLH table is enabled, Liquibase automatically captures all extension information when you run commands. It stores that data in the DBCLH table under the column EXTENSIONS. You can control this behavior by setting --dbclhistory-capture-extensions.

Also, Liquibase automatically captures all SQL it generates in commands like update and rollback. It stores that SQL in the DBCLH table under the column EXECUTEDSQL. You can control this behavior by setting the --dbclhistory-capture-sql parameter.

If you want to disable extension information or SQL capturing, set the corresponding parameter to false.
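For example, the following run (a sketch; substitute your own connection details and changelog) disables both capture settings for a single update:

```shell
# Run an update with extension and SQL capture disabled.
# The flags are the parameters described above; the connection
# values and changelog name are placeholders for your own setup.
liquibase \
  --url="jdbc:postgresql://localhost:5432/mydb" \
  --username=<postgres> \
  --password=<yourpassword> \
  --dbclhistory-enabled=true \
  --dbclhistory-capture-extensions=false \
  --dbclhistory-capture-sql=false \
  update --changelog-file=changelog.xml
```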

3. Set a severity level

By default, Liquibase returns an exit code of 1 if it halts while recording an operation to the DBCLH table. If you want to change the severity level of the DBCLH table, set the --dbclhistory-severity parameter to a different value.

The DBCLH table is now queryable in your database.
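As a quick check, you can query the table directly. The sketch below assumes the default history table name DATABASECHANGELOGHISTORY and uses psql against the same PostgreSQL database; adjust the table name if you have configured a different one:

```shell
# Inspect recent rows in the history table (table name assumed to
# be the default DATABASECHANGELOGHISTORY; adjust if customized).
psql "postgresql://localhost:5432/mydb" \
  -c 'SELECT * FROM DATABASECHANGELOGHISTORY LIMIT 10;'
```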

2

Analyze the structured logging JSON file.

In steps 2-4 of the Audit and compliance solution guide for documenting all changes, you enabled Structured logging. Now you will produce that JSON file and analyze it in Elastic. Note: Structured logging is a long-term feature. After 30-90 days, depending on your deployment rate, you can begin to gain insights into database deployment trends from the structured logging JSON file.

1. Produce the JSON file by running your version of the command below. Ensure you specify your unique URL, changelog, and log file name.

liquibase --url="jdbc:dynamodb://your-region" --changelog-file="changelog.xml" --log-format=JSON --log-file="liquibase-logs.json" update

2. Follow these steps to analyze and use Structured logging data.

3. Save these files so they are available to distribute to your auditor.

Examples

[2023-02-27 16:37:00] FINE [liquibase.integration] Performance monitoring disabled
[2023-02-27 16:37:00] FINE [liquibase.configuration] No configuration value for liquibase.outputFile found
[2023-02-27 16:37:00] FINE [liquibase.configuration] No configuration value for liquibase.strict found
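Even before loading the file into Elastic, you can do a quick ad-hoc scan from the shell. This sketch assumes the structured log contains one JSON object per line with a "level" field; adjust the pattern to match your actual log structure:

```shell
# Count entries at each log level in the structured log file.
# Assumes one JSON object per line with a "level" field.
grep -o '"level": *"[A-Z]*"' liquibase-logs.json | sort | uniq -c
```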

3

Produce the Observability reports.

Once you have enabled the Operation reports in step 5 of the Audit and compliance solution guide for documenting all changes, you can produce the reports at any time. You can also write reports directly to cloud storage such as AWS S3 or Azure Blob Storage. Note that these reports are a long-term feature: after three to six months, you will have collected enough data to show trends across all of these reports.

Run each report command to produce the reports for your auditor:

# Checks report:
liquibase checks run \
--report-enabled=true \
--report-name=my_checks_run_report.html
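The checks report above is one example. Assuming the same report flags apply to your other operations (verify this against your Liquibase version), an update operation report can be produced the same way:

```shell
# Produce an operation report for an update run (a sketch; the
# report flags are assumed to work for update as they do for
# checks run, and changelog.xml is a placeholder).
liquibase update \
  --changelog-file=changelog.xml \
  --report-enabled=true \
  --report-name=my_update_report.html
```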

Results

Structured logging reports and all of the Observability reports are produced and available to distribute to your auditor.