Database lock
Incident Report for Simwood
Resolved
This incident has been resolved.
Posted May 23, 2019 - 10:30 UTC
Update
This is now fully resolved.
Posted May 23, 2019 - 10:30 UTC
Monitoring
A fix has been implemented for this and we're continuing to monitor the situation closely.

CDRs are still backlogged, but we're processing these as fast as possible and this should be completed shortly. In the meantime, the information available in the API and portal may be slightly delayed.

Provisioning should now work as expected, and the porting desk is operating normally. Any numbers due to port will be processed shortly.
Posted May 23, 2019 - 09:21 UTC
Identified
The issue has been identified and a fix is being implemented.
Posted May 23, 2019 - 07:27 UTC
Investigating
We're presently unable to write CDRs, so billing will be delayed. Number provisioning (including porting) is also not possible at this time, along with other operations that require writing to our primary database cluster. This does not affect call routing or other primary services, as they do not use the database.

Earlier this morning one of the nodes in the cluster restarted and is being re-synced across the network from the other nodes. Normally this sync uses a single donor node, leaving another node available for writes, but this time it is using two. The sync needs to complete from at least one of those nodes before writes can resume.

To repeat, this does not affect call routing or reading from the API or portal, only making changes that require writing to permanent storage.
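For illustration only, and not a description of Simwood's actual systems: the pattern described above, where CDRs are backlogged while the primary cluster cannot accept writes and then processed once writes resume, can be sketched as a simple spool-and-drain queue. The names Spool, primary_writable and write_cdr below are hypothetical.

    # Illustrative sketch only: spool CDRs locally while the primary database
    # cluster is unwritable, then drain the backlog once writes resume.
    import json
    from collections import deque


    class Spool:
        """In-memory backlog of CDRs awaiting a writable primary."""

        def __init__(self):
            self._queue = deque()

        def add(self, cdr: dict) -> None:
            # Serialise and hold the record until the cluster accepts writes again.
            self._queue.append(json.dumps(cdr))

        def drain(self, write) -> int:
            """Replay spooled CDRs through `write`; returns how many were flushed."""
            flushed = 0
            while self._queue:
                write(json.loads(self._queue[0]))
                self._queue.popleft()
                flushed += 1
            return flushed


    def primary_writable() -> bool:
        # Placeholder health check; a real check would query the cluster state.
        return False


    def write_cdr(cdr: dict) -> None:
        # Placeholder for the actual database insert.
        print("wrote", cdr)


    if __name__ == "__main__":
        spool = Spool()
        cdr = {"call_id": "abc123", "duration": 42}
        if primary_writable():
            write_cdr(cdr)   # normal path: write straight to the cluster
        else:
            spool.add(cdr)   # cluster read-only: backlog the record
        # Later, once the re-sync completes and writes resume:
        # spool.drain(write_cdr)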
Posted May 23, 2019 - 07:26 UTC
This incident affected: API and Portal.