Manchester Connectivity
Incident Report for Simwood
Postmortem

This issue was caused by one of our Manchester Ethernet customers creating a loop between two of their remote office sites. The resulting broadcast storm was not apparent in our monitoring (it should have been) and caused CPU load on the access switch stack to rise. Whilst all forwarding on our network equipment is performed in hardware, the CPU is still required for routing protocols. The consequence was intermittent, and eventually lost, connectivity to the redundant routers above the stack.

The offending customer's ports were disabled to rectify the immediate problem and, once the logs had been examined, were reconfigured with elevated loop detection.

All has remained stable since and we apologise for any inconvenience caused. This has highlighted inadequacies in our monitoring which we will address as soon as possible.

Posted Dec 24, 2015 - 13:36 UTC

Resolved
This issue has been resolved, and is being monitored closely.

More information to follow.
Posted Dec 24, 2015 - 12:51 UTC
Identified
The issue has been identified as likely being caused by an access switch in Manchester.

We are working to resolve this as quickly as possible and will update this page as soon as more information is available.

Information reported by the portal and API may be delayed (and therefore inaccurate for a short time).
Posted Dec 24, 2015 - 12:35 UTC
Investigating
We are experiencing connectivity issues at our Manchester site, which are impacting some call routing.

Engineers are investigating as a priority; we will update this page with more information as soon as it becomes available.
Posted Dec 24, 2015 - 12:20 UTC