Get webhook notifications whenever Network & Infrastructure creates an incident, updates an incident, resolves an incident or changes a component status.
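Such notifications are typically delivered as HTTP POSTs with a JSON body. Below is a minimal sketch of a handler that routes on an event type; the payload shape (the `event`, `title`, and `status` keys) and the event names are assumptions for illustration only, not the provider's documented schema.

```python
import json

# Hypothetical event types a status-page webhook might send
# (assumed names, mirroring the four notification cases above).
HANDLED_EVENTS = {
    "incident.create",
    "incident.update",
    "incident.resolve",
    "component.status_change",
}

def handle_notification(raw_body: str) -> str:
    """Parse a webhook POST body and return a one-line summary.

    The payload keys used here are an assumption for illustration;
    check the provider's documentation for the real schema.
    """
    payload = json.loads(raw_body)
    event = payload.get("event")
    if event not in HANDLED_EVENTS:
        return f"ignored: {event}"
    title = payload.get("title", "(no title)")
    status = payload.get("status", "unknown")
    return f"{event}: {title} [{status}]"

# Example usage with a made-up payload:
body = json.dumps({
    "event": "incident.update",
    "title": "Optical flap at RBX DC",
    "status": "investigating",
})
print(handle_notification(body))  # → incident.update: Optical flap at RBX DC [investigating]
```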
There was an optical flap at RBX DC.
We are investigating.
Update(s):
Date: 2017-12-07 01:37:34 UTC 600G between RBX and GSW (Paris) is UP
Date: 2017-12-07 00:27:59 UTC Everything is working now.
We are very sorry for this outage, and we will work to prevent any similar issue in the future.
Date: 2017-12-07 00:06:05 UTC Links to BRU POP are now UP
Date: 2017-12-07 00:05:48 UTC 600G between RBX and AMS is UP
Date: 2017-12-06 23:55:29 UTC We brought up a 100G link between RBX and AMS.
Date: 2017-12-06 23:22:44 UTC We are working to bring up the links between RBX and SBG. 200G is currently UP.
Date: 2017-12-06 23:09:08 UTC RBX<>BHS UP
Date: 2017-12-06 23:08:53 UTC RBX<>LDN 600G UP
Date: 2017-12-06 22:42:44 UTC RBX<>FRA 400G UP
Date: 2017-12-06 21:56:26 UTC 4 other circuits are UP.
We are rebuilding the remaining circuits that are still down.
Date: 2017-12-06 21:48:26 UTC 4 GSW <> RBX circuits are UP. The others are being reactivated.
Date: 2017-12-06 21:37:53 UTC We are rebuilding the Paris <> RBX circuits by hand.
In a few minutes we should have 400G of capacity UP, which will reduce latency on the network.
Date: 2017-12-06 21:25:11 UTC Isolation of VAC Roubaix to reduce saturation
Date: 2017-12-06 21:17:45 UTC Traffic isolation between Amsterdam and Warsaw to reduce saturation
Date: 2017-12-06 21:17:09 UTC The configuration is in place, but it does not work.
We are erasing each configuration and rebuilding it by hand.
RBX <> LDN: UP
Date: 2017-12-06 21:16:51 UTC Good evening,
On November 9, we encountered a major problem
on our optical network at RBX. The problem was related
to a software bug in the equipment we use,
which caused the deletion of the configuration.
Since then we have updated the equipment across
our entire network. Also, to prevent this type of bug from
ever again causing trouble for our DCs, we
decided to split the equipment into 3 clusters
at the RBX site. That way, if we ever hit
this bug again, the configuration loss would only impact 30%
of the traffic.
During preparation for the maintenance that was to
start at 23:00, the configuration disappeared again
at 8:20 pm and all the links went down again.
The database was deleted even though we are running
the latest software version. So there is another bug!
http://travaux.ovh.net/?do=details&id=28835
We quickly restored the configuration. Some
links have come back, but not all of them yet. Currently
traffic passes Paris > GRA > AMS > RBX instead of
directly Paris > RBX, hence the high latencies.
We are working with Cisco to understand why all
the links are not UP even though the configuration was
restored at RBX.
We are not going to do the intervention tonight.
I want to understand why the configuration disappears,
and how we can carry out this maintenance without
the slightest impact on production.
Sincerely,
Octave
Date: 2017-12-06 21:15:27 UTC We are shutting down AMSIX / Worldstream temporarily.
Date: 2017-12-06 21:14:42 UTC We are bringing 100G links back up to RBX.
For now, we see saturation between RBX and AMS.