There may be occasions when it is desirable or necessary to mass-delete data from a table. You may want to perform the deletion under a predefined set of conditions, or you may simply want to wipe the table clean. Whatever the situation, barring corrupt or inaccessible data, there is a deletion method to suit your needs. The methods outlined in this article are as follows, ordered from simplest to most advanced:
- UI actions
- Clone exclude rules
- Table cleanup policies
- Scripted deletion
The best approach will vary on a case-by-case basis, and it is up to you to determine the right option for a given situation. For large quantities of records, the only fast option is clone exclude rules, but those may not be suitable for your situation. If you have a business-critical need to delete many millions of records from a table and cannot wait the amount of time required by the methods in this article, you will need to open an incident with ServiceNow. Treat this as an absolute last resort.
DISCLAIMER: No portion of this article is intended to be implemented as-is on a live instance without thorough validation and testing. The contents of this article are provided without warranty or assurance of suitability for any given situation. Usage of any script or implementation provided in this article is done at your own risk.
Before deciding how to go about deleting data in bulk, it's important to consider some key points:
- What cascade deletions will occur? (Child/related records)
- What references will be broken/cleared?
- What business rules will be triggered?
- What workflows will run?
- What other features could be impacted? (Cross-scope privilege generation, etc.)
- Will my deletion be tracked by update sets?
Performing the appropriate research beforehand can help you determine the potential side-effects of your deletion, and will help you to formulate a good plan for tidying up afterwards if necessary. For example, you may want to devise a plan for replacing broken references. With a decent understanding of the moving parts that will come into play, you can make a more educated decision about whether or not it is safe and responsible to proceed with your plan.
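For instance, to see which reference fields across the instance point at the table you intend to clear, you can query the dictionary. The following is a minimal sketch; sys_dictionary and its name, element, internal_type, and reference columns are standard, but the "incident" target is just an example, so validate the output on your own instance:

var dict = new GlideRecord("sys_dictionary");
dict.addQuery("internal_type", "reference"); // only reference-type columns
dict.addQuery("reference", "incident");      // example: fields that point at incident
dict.query();
while (dict.next()) {
    gs.info(dict.getValue("name") + "." + dict.getValue("element") + " references incident");
}

Each result is a table.column pair that could be left holding a dangling sys ID after your deletion.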
It's also important to consider possible backout plans. In most cases, exporting and downloading a large number of records is not viable, so the backout options for a mass deletion are limited. The Delete Recovery tool available in London is a great answer to this problem, as it records deletions and allows you to roll them back, even restoring broken references. You can read about the Delete Recovery tool here: Roll back and delete recovery
That said, there is no magic "undo" button! Even with the advanced delete recovery tools in London, you must use extreme caution when formulating and executing your plan.
Deleting With UI Actions
- Very simple
- Vulnerable to cancellation by UI transaction quota rules
This can be considered the "easy" approach. If you would like to delete all the records from a table, you can simply open the sys_db_object record for that table and click the "Delete All Records" UI action. By default, UI transactions like this are limited to about 5 minutes. If they exceed that duration, they will automatically be cancelled. Thus, if your table contains a lot of data, or data that will trigger a lot of business rules or cascade deletions, you may need to increase the transaction quota in order to perform the operation in one go.
You can also use the list view to filter a table, select multiple records, and click the "Delete" UI action. However, due to pagination, you are limited in the number of records you can select and delete at once, so for a large number of records the list view approach may not be viable.
Deletions done this way will trigger business rules and workflows, and will be tracked by update sets. These UI actions rely internally on a GlideRecord object and the deleteMultiple() function, so their performance is comparable to this script:
var gr = new GlideRecord("table_name"); // target table
gr.query();                             // match every record
gr.deleteMultiple();                    // delete them all
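Before committing to a full deletion, it can help to gauge how much work it represents. A quick row count via GlideAggregate is cheap; this is a minimal sketch, with table_name standing in for your table:

var ga = new GlideAggregate("table_name");
ga.addAggregate("COUNT"); // server-side count, no rows fetched
ga.query();
if (ga.next()) {
    gs.info("table_name contains " + ga.getAggregate("COUNT") + " rows");
}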
Deleting With Clone Exclude Rules
- Very fast
- Extremely inflexible, will truncate the entire table
- Can leave broken references and possibly orphan records
- Instance must be cloned over, not viable for production
When data is excluded by a clone, the excluded table is truncated. Table truncation is a relatively cheap operation, and is much faster than deleting every record from a table. However, any references to the excluded records on other tables will be broken, meaning the referencing fields will still contain the sys ID of records that no longer exist. This may result in unintended consequences, so account for it before choosing this method.
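If you do truncate a table this way, you can sweep for dangling references afterwards. The sketch below checks one assumed table/field pair (incident.assigned_to is only an example); because it dereferences every row, expect it to be slow on large tables:

var gr = new GlideRecord("incident");
gr.addNotNullQuery("assigned_to"); // only rows that claim to reference something
gr.query();
var broken = 0;
while (gr.next()) {
    var ref = gr.assigned_to.getRefRecord(); // load the referenced record
    if (!ref.isValidRecord()) {              // target no longer exists
        broken++;
    }
}
gs.info(broken + " broken assigned_to references found on incident");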
Another risk of direct table truncation is orphaned data. In a table-per-class (TPC) hierarchy, each extended table exists as a separate physical table on the database, and those physical tables are joined to form complete rows. If you truncate one table in the hierarchy, you remove the data from that physical table, but the joined rows on the parent and/or child tables remain. Data corrupted this way is considered orphaned, and will no longer be accessible through normal means at the platform level. Thus, if you are excluding a table that is part of a TPC hierarchy (unlike the Task and CMDB hierarchies, which are stored flattened), be sure to exclude all parent and child tables as well to avoid creating orphaned data.
Deleting With Table Cleanup Policies
- Set-it-and-forget-it feature, runs in background on a schedule
- Flexible configuration options
- Will not trigger business rules/workflows (unless the table has the iterativeDelete attribute)
- Respects the reference cascade rule
- Designed for maintenance, not ideal for one-time deletions
This is a good solution if you wish to routinely delete records that fit some simple criteria, such as closed tasks that have not been updated within a specific amount of time. The table cleaner job runs once per hour. Unless the table has the iterativeDelete attribute set to true, the underlying code does not rely on a GlideRecord to perform the deletion. That means no business rules or workflows will be triggered, which can significantly improve performance, but it also means the deletion will not be audited or tracked by update sets. If the attribute is present and set to true, these engines will be triggered. When implementing a table cleanup policy, make sure there is a supporting index for the cleanup condition, especially on a large table.
Here is an example of a table cleanup policy that will delete incident records in the "Closed" state that have not been updated in 30 days:
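As a minimal sketch, such a policy could be created by inserting a record into the sys_auto_flush table, which backs the table cleaner. The field names below, and the availability of a conditions field, are assumptions based on that table; verify them on your instance before relying on this:

var policy = new GlideRecord("sys_auto_flush"); // table cleaner policy table
policy.initialize();
policy.tablename = "incident";        // table to clean
policy.matchfield = "sys_updated_on"; // date field compared against the age
policy.age = 60 * 60 * 24 * 30;       // age in seconds (30 days)
policy.conditions = "state=7";        // assumed: encoded query for Closed incidents
policy.insert();

The matchfield/age pair means "delete rows whose sys_updated_on is older than 30 days"; the table cleaner evaluates this each time the hourly job runs.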
Deleting With Scripts
- Near-limitless flexibility
- Can bypass business rules, workflows, and other engines
- Can be run directly, or turned into a business rule, UI action, scheduled job, etc.
- Not vulnerable to transaction quota rules (if run in background)
- Requires scripting knowledge
The design and logic of a scripted deletion depends on the requirement. It could be as simple as deleting every record from a table without triggering business rules and workflows, or it could contain advanced conditional logic and perform additional cleanup operations. Here is an example of the former, which simply deletes all records on the specified table:
var gr = new GlideRecord("table_name");
gr.query();
gr.setWorkflow(false); // Bypass business rules and workflows
gr.deleteMultiple();
Applying conditional logic enables you to be more specific about your deletion. Here is an example of the same script, modified to only delete records matching certain conditions:
var gr = new GlideRecord("table_name");
gr.addQuery("state", "closed");
gr.addQuery("category", "4ca01be0db31eb009540e15b8a961936");
gr.addNullQuery("user");
gr.query();
gr.setWorkflow(false); // Bypass business rules and workflows
gr.deleteMultiple();
Using setWorkflow(false) will also suppress update set tracking and cascade deletion. It is a good idea to use this when you do not need any of these engines and simply want the data deleted as quickly as possible. It is also good practice to add some logging to your script to make it easier to debug and evaluate results. You can put a script like this into a scheduled script execution job and run it on a schedule. You can even create custom UI actions on tables where bulk deletion based on certain logic will be a common end-user requirement.
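For very large tables, running the deletion in batches inside a scheduled script execution keeps each run short and makes progress visible in the logs. The following is a sketch rather than a tuned implementation; the batch size is arbitrary, the condition is an example, and it deletes row by row because deleteMultiple() combined with setLimit() has been reported to behave inconsistently across versions:

var BATCH_SIZE = 1000; // arbitrary; tune for your instance and data
var gr = new GlideRecord("table_name");
gr.addQuery("state", "closed"); // example condition; substitute your own
gr.setLimit(BATCH_SIZE);        // cap the work done in one execution
gr.query();
gr.setWorkflow(false);          // bypass business rules, workflows, and auditing
var deleted = 0;
while (gr.next()) {
    gr.deleteRecord(); // row-by-row delete; slower, but predictable
    deleted++;
}
gs.info("Batch deletion removed " + deleted + " records from table_name");

Each execution removes at most one batch, so schedule the job to repeat until a run logs zero deletions.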
IMPORTANT: Never run a script on a live production instance without thorough testing. Even a small mistake, such as a misspelled column name, can lead to unintended results. It is best to clone over a sandbox or testing instance to use for experimentation, where any lost data can easily be recovered by cloning again.
Maintenance and Prevention
Keeping your database as lean as possible yields significant performance and usability benefits, making your data more accessible and more manageable. A well-disciplined strategy for retiring old data, one that satisfies compliance requirements while keeping the database fast, will have a significant positive impact on your company's operations on the ServiceNow platform.