There's a scheduled job named "Service Model's Blob Reaper", which functions as a table cleaner for some of the tables used by Service Mapping.

This job does not run often (once a week), but when it does it may consume a large amount of memory, placing a high load on the node and degrading instance performance. In these cases it may also take hours to complete.

Steps to Reproduce

Execute "Service Model's Blob Reaper" from sys_trigger.


Workaround

1. Go to the 'Service Model's Blob Reaper' script in sysauto_script.

2. Edit the script:
At the beginning, set the glide.service.blob.enableCaching property to "false".
This disables the BlobSoftCache during the job's execution, so the job relies only on database queries.
Please note that this may affect other Service Model processes running in the background, if there are any.

3. At the bottom of the script, set this property back to "true" so it does not affect Service Model processes executed later. This is an important step.

4. Below is an example of the script, provided by our development team:

try {
    gs.setProperty('glide.service.blob.enableCaching', 'false');
    // ... the existing Blob Reaper script logic runs here ...
} finally {
    gs.setProperty('glide.service.blob.enableCaching', 'true');
}

5. Once the Blob Reaper and Discovery jobs have completed, memory should return to a normal threshold.
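The try/finally pattern in the script above guarantees that caching is re-enabled even if the reaper logic throws. The following is a minimal, self-contained sketch of that pattern; since real scripts run server-side on the ServiceNow platform, the `gs` object and the reaper body are stubbed here for illustration only.

```javascript
// Stub standing in for the platform-provided GlideSystem 'gs' API
// (assumption: real scripts run inside ServiceNow, not Node.js).
var props = { 'glide.service.blob.enableCaching': 'true' };
var gs = {
    setProperty: function (name, value) { props[name] = value; },
    getProperty: function (name) { return props[name]; }
};

// Hypothetical placeholder for the real Blob Reaper body; it fails on
// purpose to show that the property is still restored afterwards.
function runBlobReaper() {
    throw new Error('simulated reaper failure');
}

gs.setProperty('glide.service.blob.enableCaching', 'false');
try {
    runBlobReaper();
} catch (e) {
    // In the real job, errors would surface in the job log instead.
} finally {
    // finally runs whether the reaper succeeds or throws, so caching
    // is always re-enabled for later Service Model processes.
    gs.setProperty('glide.service.blob.enableCaching', 'true');
}

console.log(gs.getProperty('glide.service.blob.enableCaching')); // prints "true"
```

This is why step 3 is safe to rely on: the restore in the finally block executes on every exit path, including an exception partway through the reaper.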

Related Problem: PRB1264241

Seen In

There is no data to report.

Intended Fix Version

Kingston Patch 11

Fixed In

London Patch 3

Safe Harbor Statement

This "Intended Fix Version" information is meant to outline ServiceNow's general product direction and should not be relied upon in making a purchasing decision. The information provided here is for information purposes only and may not be incorporated into any contract. It is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for our products remains at ServiceNow's sole discretion.

Associated Community Threads

There is no data to report.

Article Information

Last Updated: 2018-10-22 09:02:53