Simwood Status
All Systems Operational
Wholesale VoIP Operational
London Operational
Slough Operational
Manchester Operational
Fax Operational
Virtual Interconnect Operational
Customer Interfaces Operational
API Operational
Portal Operational
Network Operational
London Network Operational
Slough Network Operational
Manchester Network Operational
Hosting Operational
vCenter London Operational
vCenter Slough Operational
vCenter Manchester Operational
Colo London Operational
Colo Slough Operational
Colo Manchester Operational
Support Operational
Operations Desk Operational
Porting Desk Operational
Scheduled Maintenance
API - End of clear HTTP support Feb 28, 23:45 - Mar 1, 00:15 UTC
As announced on 23rd December 2016, we will be discontinuing support for clear HTTP requests to the API with effect from 1st March 2017. After this date, all requests to the API must use HTTPS (TLSv1 or newer).

Most customers will be unaffected by this, but we are aware of a small minority who still make unencrypted HTTP requests to the API. We have given three months' notice of this change, and we ask that you use this time to ensure that your applications make requests to https://api.simwood.com/ before this date.

For more information please see http://blog.simwood.com/2016/12/securing-api-requests-https/
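
As a quick way to check an integration ahead of the change, the sketch below (a minimal illustration using only Python's standard library; the port, the timeout, and the omission of authentication and specific API paths are assumptions here, not part of the announcement) confirms that a TLS connection to the API host can be negotiated:

    import socket
    import ssl

    HOST = "api.simwood.com"  # API host from the announcement above

    # Open a TCP connection on the standard HTTPS port, complete a TLS
    # handshake, and report the protocol version that was negotiated.
    # Authentication and specific API endpoints are deliberately omitted.
    context = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Connected over", tls.version())  # e.g. "TLSv1.2"

If this prints a TLS version rather than raising an error, the host is reachable over HTTPS; your application's own requests then simply need to target https://api.simwood.com/ rather than the clear HTTP endpoint.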
Posted on Dec 23, 12:32 UTC
Past Incidents
Feb 25, 2017

No incidents reported today.

Feb 24, 2017
Postmortem - Read details
Feb 24, 18:58 UTC
Resolved - This incident is confirmed resolved. The RFO from BT is not exactly comprehensive and requires further conversations. We will post more in our own RFO.

We'd like to thank all customers for their understanding and compliments on how this was handled. Losing 50% of our capacity into/from BT is quite a big deal that could have been a major outage. The way the network is designed ensured it wasn't, and our hierarchical priority of channel allocations (http://blog.simwood.com/2016/07/relaxed-channel-limits/) ensured that all Reserved and Best Efforts capacity was honoured. A proportion of Below Best Efforts traffic (i.e. that above any allocated channel limit which we'd normally let pass if we could) was constrained. We consider this a far better outcome than the alternative.
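
For illustration only, the sketch below shows the general shape of that hierarchy. The tier names follow the blog post above, but the allocation logic, limits and capacity figures are invented for the example and are not Simwood's actual implementation:

    from enum import IntEnum

    class Tier(IntEnum):
        # Illustrative tiers only; a lower value means a higher priority.
        RESERVED = 0            # committed capacity, always honoured
        BEST_EFFORTS = 1        # traffic within a customer's channel limit
        BELOW_BEST_EFFORTS = 2  # traffic above any limit, carried only when spare capacity exists

    def admit(call_attempts, capacity):
        """Admit call attempts in tier order until the reduced capacity is used.

        call_attempts is a list of (customer, tier) pairs. Under a capacity
        reduction, Reserved and Best Efforts traffic is admitted first, so any
        constraint falls on Below Best Efforts traffic.
        """
        admitted = []
        for customer, tier in sorted(call_attempts, key=lambda a: a[1]):
            if len(admitted) >= capacity:
                break
            admitted.append(customer)
        return admitted

In other words, when roughly half of the BT-facing capacity disappeared, only traffic in the lowest tier could be constrained, which is the behaviour described above.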
Feb 24, 15:43 UTC
Update - Circuits appear to have remained stable. BT were scheduled to work on this job at 2pm but the engineer has instead been tasked with establishing what happened. We will confirm that and mark this incident resolved when we hear the outcome.
Feb 24, 14:19 UTC
Monitoring - We have seen our London circuits come back up. We await feedback from BT and will be monitoring before putting them back into service.
Feb 24, 12:44 UTC
Identified - All of our circuits facing BT from our London Telehouse site are showing as down. This has been reported to BT and we will update this incident with their feedback.

Due to our unique architecture this is not service affecting, as calls from BT are seamlessly flowing via Slough; it does, however, reduce our redundancy to n (from n+1) in respect of BT and our overall capacity to/from BT. Virtual Interconnect and Managed Interconnect customers have reserved capacity for this eventuality, and whilst there is presently adequate headroom, customers without a commitment may see capacity constrained at times.
Feb 24, 11:05 UTC
Feb 23, 2017
Resolved - We're still investigating the underlying cause of this; however, it appears to have been an isolated issue affecting the off-net hosted portal's connection to the Simwood API.

Customers using the Simwood API directly were unaffected.
Feb 23, 11:19 UTC
Investigating - We have received reports from some customers regarding intermittent issues with some functions in the Simwood Portal.

We are investigating this as a priority; more information will follow shortly.
Feb 23, 10:36 UTC
Feb 22, 2017
Resolved - This incident has been resolved.
Feb 22, 15:36 UTC
Monitoring - As a result of the temporary interruption to the service at our Manchester site, we are aware of some delays to CDR processing, meaning some of your most recent call information may not be visible in the Portal or via the API.

Please accept our apologies for any inconvenience this causes.
Feb 22, 12:19 UTC
Postmortem - Read details
Feb 22, 16:11 UTC
Resolved - Things have remained stable so we are marking this resolved. An RFO will follow shortly.
Feb 22, 14:32 UTC
Monitoring - This was rectified at approx. 11:43. We're very sorry it wasn't done at 6am when it was requested of Equinix.

Naturally, we're continuing to monitor and are looking into the underlying issues.
Feb 22, 11:55 UTC
Update - We've given up waiting for Equinix. A kind local engineer will be on-site in 10 minutes to re-patch for us, assuming they're capable of letting him in.
Feb 22, 10:49 UTC
Update - We continue to wait for Equinix to perform the repatching of a cable requested 4 hours ago. The situation is otherwise unchanged from previously.
Feb 22, 10:10 UTC
Update - We continue to await a response from Equinix - we're well into the 3rd hour now.

Meanwhile, portal and API access remain degraded, i.e. functions that depend in some way on Manchester will be erratic. Voice service remains unaffected for customers configured in accordance with our interop information.
Feb 22, 08:57 UTC
Update - To clarify previous updates, external connectivity to Manchester was unaffected, but we lost connectivity to the stack of access switches which sit in front of all hosts there (due to all LAGs up to the routers becoming blocked). This was overcome with a temporary reconfiguration on the router side to disable link aggregation. However, we presently have no control of that switch stack, including by direct console (i.e. it is unresponsive), and whilst it is passing packets and all services in Manchester are responding, connectivity to them appears erratic. We await a response from remote hands with a view to regaining control so that we can investigate and remedy.

At the present time all services in Manchester should be considered unstable but customers configured as per our guidance will already have had calls swung to other sites. Voice service is thus unaffected for these customers and volumes are normal for this time of day.
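
As a generic illustration of that kind of multi-site failover (this is not Simwood's documented interop configuration; the SRV name below is hypothetical and the dnspython package is assumed to be installed), a client can order candidate sites by DNS SRV priority and move to the next entry when a site stops responding:

    import dns.resolver  # dnspython, assumed installed

    # Hypothetical SRV name for illustration only; consult the interop
    # documentation for the actual hostnames and records to use.
    SRV_NAME = "_sip._udp.example.invalid"

    # SRV answers carry a priority and a weight; clients try the lowest
    # priority first and fall back to the next entry if that site is
    # unreachable, which is how calls end up on another site during an incident.
    records = sorted(dns.resolver.resolve(SRV_NAME, "SRV"),
                     key=lambda r: (r.priority, -r.weight))
    for r in records:
        print(r.priority, r.weight, str(r.target), r.port)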
Feb 22, 07:17 UTC
Update - We are still aware of some connectivity issues reaching Manchester from some other locations. We are continuing to investigate and will keep updating this page as more information becomes available.
Feb 22, 05:50 UTC
Identified - Connectivity to the Manchester site has been restored; we're still investigating the underlying fault.
Feb 22, 05:04 UTC
Investigating - We are aware of an issue affecting our Manchester site.

We are investigating this as a priority and will update this as more information becomes available.
Feb 22, 04:15 UTC
Feb 21, 2017

No incidents reported.

Feb 20, 2017

No incidents reported.

Feb 19, 2017

No incidents reported.

Feb 18, 2017

No incidents reported.

Feb 17, 2017

No incidents reported.

Feb 16, 2017

No incidents reported.

Feb 15, 2017

No incidents reported.

Feb 14, 2017

No incidents reported.

Feb 13, 2017

No incidents reported.

Feb 12, 2017

No incidents reported.

Feb 11, 2017

No incidents reported.