Sunday, April 12, 2026

MAS 9.1 Configuration for SMTP with Office 365 using OAuth 2.0 for sending emails

MAS 9.1 configuration for SMTP server with Office 365 using OAuth 2.0 for sending emails from Maximo.


Microsoft plans to retire Basic Authentication for client connections, so we need to configure OAuth for sending emails.

Before configuring MAS, gather the following details from Office 365:

  • Tenant ID: used in the Token URL, for example 51624263-3eg2-2371-871c-d2176f17d56c
  • Token URL: https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
    for example, https://login.microsoftonline.com/51624263-3eg2-2371-871c-d2176f17d56c/oauth2/v2.0/token
  • Scope: "offline_access https://outlook.office365.com/IMAP.AccessAsUser.All"
  • Client ID: used in the OAuth Handler, for example gc2ec841-b89c-3f29-9853-17d034c434f9
  • Client Secret: used in the OAuth Handler, for example Eow8Q~_jUB3NfevPmA_hhWpWUricBFtDzhaubcCa
  • Grant Type: refresh_token
  • Refresh Token: used in the OAuth Handler, for example 1.AUEApSSkgQMdy0ukCtEC8h412ktCgSNmRCJOvKDhF1
  • redirect_uri: https://hostname/maximo
  • Email address: the email address that has client authorization; it appears as the "Sent from" address in emails.
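
For context, here is how the values above fit together: the MAS OAuth handler exchanges the stored refresh token for a short-lived access token by posting to the Token URL. A sketch of that exchange, using placeholders for the values from the table (you never send this manually, and the form values would be URL-encoded on the wire):

```
POST /{tenant-id}/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

client_id={client-id}
&client_secret={client-secret}
&scope=offline_access https://outlook.office365.com/IMAP.AccessAsUser.All
&grant_type=refresh_token
&refresh_token={refresh-token}
```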


Configuration steps in MAS 9.1 are as follows. 

OAuth Client

  • Open the End Points application and click Add/Modify OAuth Clients.
  • Add a new End Point OAuth Client using the information above.
  • Do not select the "Access Token persistent?" check box. It stores the token data in the MAXOAUTHCLIENT table and throws an error if the refresh token generated by Microsoft is longer than the ACCESSTOKEN column.

By default, the ACCESSTOKEN and REFRESHTOKEN columns are 2000 characters long. In real-world scenarios, the tokens issued by Microsoft and other providers for the SMTP mail server or the E-mail Listener mailbox configuration exceed 2000 characters. If you plan to persist token values, increase the length of these fields.
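
In Maximo, column changes should normally be made through the Database Configuration application so the metadata stays in sync; purely as a sketch, the direct SQL Server statements would look like the following (4000 is an example length, and the varchar type is an assumption):

```sql
-- Sketch only: widen the token columns on MAXOAUTHCLIENT (SQL Server syntax)
ALTER TABLE maxoauthclient ALTER COLUMN accesstoken varchar(4000);
ALTER TABLE maxoauthclient ALTER COLUMN refreshtoken varchar(4000);
```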


Office365 OAuth Refresh Token Generation 

If the refresh token has expired, follow the link below to generate a new one.

Office365 OAuth Generating a Refresh Token

It is recommended to consult your email service provider about the refresh token's lifespan and its expiration after inactivity.

As per the instructions, we need to store the files in the path c:\dev\ps


Output File with refresh token



SMTP System Properties

In MAS 9.1, a new system property, mxe.mail.oauth.clientName, is introduced to set the OAuth client name for the mail server.

  • mxe.mail.oauth.clientName: MAILOAUTH (the OAuth Client name)
  • mail.smtp.host: smtp.office365.com
  • mail.smtp.port: 587
  • mxe.smtp.user: no-reply-mas@ibm.com



To disable OAuth for sending emails, set the mxe.mail.oauth.clientName property value to null.

Saturday, March 7, 2026

Maximo Database Table Data Purging

Maximo Database Table Data Purging involves removing unused data, indexes, and attachments that can slow down database read operations and consume unnecessary storage. 

Performing regular data purging helps improve database performance and reduces cloud database storage costs for clients.

Prerequisite: Always take a full database backup before running purge scripts, especially in production environments.

Common Data Cleanup Activities 
  • Removing non-critical business data from out-of-the-box Maximo tables 
      Certain Maximo tables store operational or historical data that may no longer be required after a period of time, for example CRONTASKHISTORY and ESCREPEATTRACK. Below are some sample scripts that can be used to purge old records.

-- delete crontask history records that are more than 2 years old based on start time
delete from crontaskhistory
where starttime < dateadd(YEAR,-2,GETDATE());

-- delete communication log records (includes escalations)
-- that are more than 8 years old based on created date
delete from commlog
where createdate < dateadd(YEAR,-8,GETDATE());

-- delete escalation repeat tracking records whose owner records are in history
delete from escrepeattrack
where escalation='POCLOSE' and ownerid in (select poid from po where historyflag=1);

delete from escrepeattrack
where escalation='SRCLOSED' and ownerid in (select ticketuid from sr where historyflag=1);

delete from escrepeattrack
where escalation='ESCBLTNEXP' and ownerid in (select bulletinboardid from bulletinboard where expiredate < dateadd(DAY,-1,GETDATE()));

delete from escrepeattrack
where escalation in ('WOCAN','WOCLOSE') and ownerid in (select workorderid from workorder where historyflag = 1);

delete from escrepeattrack
where escalation in (select escalation from escalation where active = 0);
  • Delete child records before parent records to avoid orphan records and maintain referential integrity in the database.
-- delete Message Tracking records
-- that are more than 3 months old based on initial date time
delete from maxintmsgtrkdtl
where meamsgid in (select meamsgid from maxintmsgtrk where initialdatetime < dateadd(DAY,-90,GETDATE()));

delete from maxintmsgtrk
where initialdatetime < dateadd(DAY,-90,GETDATE());

-- delete Integration Error Messages
-- that are more than 2 years old based on statusdate
delete from maxinterrormsg
where messageid in (select messageid from maxinterror where statusdate < dateadd(YEAR,-2,GETDATE()));

delete from maxinterror
where statusdate < dateadd(YEAR,-2,GETDATE());
  • Batch Deletions
        Deleting a large volume of data in a single transaction can fill up the transaction log and impact database performance, so it is recommended to perform the delete operation in batches.

-- SQL Server Script
SET NOCOUNT ON;
DECLARE @DeletedRows INT = 0;
DECLARE @TotalDeleted BIGINT = 0;
DECLARE @BatchStart DATETIME;
DECLARE @StartTime DATETIME = GETDATE();
PRINT CONCAT('Cleanup started at ', CONVERT(VARCHAR(19), @StartTime, 120), '.');

WHILE 1 = 1
BEGIN
    SET @BatchStart = GETDATE();

    delete top (50000) from escrepeattrack
    where escalation = 'WOCLOSE' and objectname = 'WORKORDER'
      and not exists (select 1 from workorder (nolock)
                      where escrepeattrack.ownerid = workorderid);

    SET @DeletedRows = @@ROWCOUNT;
    SET @TotalDeleted += @DeletedRows;
    PRINT CONCAT('Deleted ', @DeletedRows, ' rows in this batch (started at ',
                 CONVERT(VARCHAR(19), @BatchStart, 120),
                 '). Total deleted so far: ', @TotalDeleted, '.');

    IF @DeletedRows = 0
        BREAK;

    -- Checkpoint after each batch to flush data and logs
    CHECKPOINT;

    -- Optional pause to reduce load
    WAITFOR DELAY '00:00:04';
END;
  • Reclaim Unused Data After Large Deletions - Database dependent
    • When a large number of records are deleted from a table, unused space may still remain allocated.
    • For MS SQL Server, rebuild the clustered indexes on the table after the cleanup process to reclaim unused data pages and improve overall database storage efficiency.
    • This approach varies for other databases like DB2 or Oracle.  
ALTER INDEX [indexname] ON [dbo].[tablename]
REBUILD PARTITION = ALL
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      IGNORE_DUP_KEY = OFF,
      ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 MINUTES, ABORT_AFTER_WAIT = NONE)),
      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF, DATA_COMPRESSION = PAGE);
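For reference, the rough equivalents on other databases are sketched below; the table and schema names are placeholders, and the exact options should be confirmed with your DBA:

```sql
-- DB2: reorganize the table to release space back to the tablespace
REORG TABLE maximo.escrepeattrack;

-- Oracle: shrink the segment online (requires row movement)
ALTER TABLE escrepeattrack ENABLE ROW MOVEMENT;
ALTER TABLE escrepeattrack SHRINK SPACE;
```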
  • Create a procedure containing all delete scripts if you plan to run the same set of scripts against multiple databases, and schedule it to run at fixed intervals.
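As a sketch, the purge statements can be wrapped in a stored procedure (SQL Server syntax; the procedure name and the statements included here are examples, and the retention periods should match your own policy):

```sql
CREATE OR ALTER PROCEDURE dbo.maximo_purge_history
AS
BEGIN
    SET NOCOUNT ON;

    -- crontask history older than 2 years
    delete from crontaskhistory
    where starttime < dateadd(YEAR, -2, GETDATE());

    -- repeat-tracking rows for inactive escalations
    delete from escrepeattrack
    where escalation in (select escalation from escalation where active = 0);
END;

-- Schedule via SQL Server Agent (or cron with sqlcmd) to run at fixed intervals:
-- EXEC dbo.maximo_purge_history;
```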
  • Identifying Unused Tables and Indexes
    • Identify indexes and tables that are not being used, with the help of a DBA.
    • Disable the indexes and drop the tables.
    • Always review the list with the customer or business stakeholders before taking any action.
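
Disabling an index before dropping a table gives a safe rollback window: a disabled index can be restored with a simple rebuild, while a dropped table cannot. A SQL Server sketch with placeholder names:

```sql
ALTER INDEX [indexname] ON [dbo].[tablename] DISABLE;

-- to restore it later if something breaks:
-- ALTER INDEX [indexname] ON [dbo].[tablename] REBUILD;

-- only after stakeholder sign-off:
-- DROP TABLE [dbo].[tablename];
```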

  • Cleaning Up Unlinked Attachments
          Attachments in the doclinks table may no longer be linked to valid records. For example, the following query finds work order attachments whose owning record no longer exists.
select * from doclinks
where ownertable = 'WORKORDER'
  and ownerid not in (select workorderid from workorder);
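
After the result set has been reviewed with stakeholders, the same predicate can drive the delete; this sketch covers the WORKORDER owner table only, and would be repeated per owner table:

```sql
delete from doclinks
where ownertable = 'WORKORDER'
  and ownerid not in (select workorderid from workorder);
```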


Best practices
  • Automate purge scripts using a stored procedure if you plan to run the same set of scripts against multiple databases
  • Schedule cleanup jobs during off-peak hours 
  • Test scripts in lower environments before production
  • Monitor performance impact
Courtesy: Nausheel Patadia