Category: Facilities

Sonic Expands Gigabit Fiber for Businesses

Sonic has completed Gigabit Fiber Internet construction in the business park network at the Sonoma County Airport, and last week began to activate new Gigabit business customers.

The new network spans nine miles, passes hundreds of businesses, and lights over 200 buildings. Some of these locations had only T1 (1.5Mbps) services available prior to the build-out of Sonic Gigabit Fiber Internet.

This is Sonic’s second completed business park fiber build-out, after the Corporate Center park in southwest Santa Rosa, which was completed last year. Next up, construction is underway to serve businesses in Petaluma off North McDowell and in the Redwood Business Park.

The business Gigabit Fiber Internet product offers Gigabit (1000Mbps) Internet access plus Hosted PBX to the desktop: a complete business communications suite. Pricing is $40 per employee or desk per month for Gigabit Internet access, cloud phone service and unlimited nationwide calling. Custom solutions are also available, including building interconnection for campus WANs, SIP trunking, PRI and POTS.

Business fiber services are part of Sonic’s overall fiber initiatives, and support the expansion of network capacity and backbone throughout our regional footprint.

Here are a few photos related to the Airport project:

San Francisco Cabinets

The San Francisco Business Times reports that a San Francisco judge has rejected a challenge to AT&T’s planned cabinet deployment, which will soon deliver AT&T’s U-verse broadband and television services.

I’ve written in the past in support of the infrastructure necessary for broadband service delivery, and I am heartened by this ruling that the cabinets are not subject to environmental review.

That said, cabinets can be a magnet for graffiti, and service providers should minimize their cabinet footprint while monitoring for incidents of graffiti. Cleanup must be swift when damage does occur.

Sonic.net’s own plan to deliver Gigabit Fiber-to-the-home in San Francisco is moving along, with a number of regulatory and permitting hurdles now behind us. While this project would mean around 188 additional cabinets in San Francisco, that is fewer than the slower copper-delivered U-verse service requires, so it is a lower-impact project.

We are sensitive to the concerns of San Francisco residents, and will seek to minimize the visual and obstructive impact of our planned cabinet deployments. Cabinets will be monitored for graffiti, and we will establish a hotline for reporting it. Any graffiti found will be removed within one weekday.

We will also deliver the best possible service: Fiber-to-the-home, at full Gigabit speeds.

Huge Power Upgrade

We have been working on a large upgrade to our power capacity in our datacenter. The work today was a key step in bringing online our new Mitsubishi 500 kVA backup power system. This massive unit is our third UPS; the others are a Liebert 130 kVA unit and a Powerware 160 kVA unit.
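For rough context, here is the simple capacity arithmetic behind the upgrade, sketched in Python. Treating nameplate kVA as a stand-in for deliverable backup capacity is an assumption on my part; real headroom depends on power factor and actual load.

```python
# Rough UPS capacity arithmetic using the nameplate ratings mentioned above.
# Assumption: nameplate kVA is used as a stand-in for deliverable backup capacity.

existing_kva = 130 + 160          # Liebert 130 kVA + Powerware 160 kVA
new_kva = 500                     # the new Mitsubishi unit
total_kva = existing_kva + new_kva

print(f"Existing capacity: {existing_kva} kVA")
print(f"With the Mitsubishi online: {total_kva} kVA "
      f"({total_kva / existing_kva:.1f}x the previous total)")
# -> 290 kVA before, 790 kVA after, roughly 2.7x the previous backup capacity
```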

The system design and the work today were supervised by Russ Irving, our staff power system expert. The work was accomplished without any interruption in service to our datacenter. During the transition, we had a second standby generator and transfer switch wired into the power and cooling systems via a carefully orchestrated process.


Steaming

It’s cool and wet tonight, the perfect conditions for the creation of steam in our cooling plant. Below are photos of the two cooling towers putting out steam and mist.

This equipment uses outside air to cool and compress freon, which is then piped inside to large air handlers in the data center. See our cooling system video for an overview.

Each of the cooling towers has 200 tons of cooling capacity, for a total of 400 RT (refrigeration tons) in the system. Either of the two towers can accommodate our current data center cooling requirement, so we have redundant capacity, allowing for failure or maintenance.

What does 200 tons of cooling capacity mean? From Wikipedia:

The unit ton is used in refrigeration and air conditioning to measure heat absorption. Prior to the introduction of mechanical refrigeration, cooling was accomplished by delivering ice. Installing one ton of refrigeration replaced the daily delivery of one ton of ice.

In North America, a standard ton of refrigeration is 12,000 BTU/h = 200 BTU/min ≈ 3,517 W. This is approximately the power required to melt one short ton (2,000 lb) of ice at 0 °C in 24 hours, thus representing the delivery of 1 ton of ice per day.

So, if I’m doing my math right, with both towers at full capacity we can absorb about 1.4 million watts of heat. That would be an awful lot of trucks full of ice!
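To double-check that figure, here is a quick back-of-the-envelope calculation in Python using the Wikipedia definition quoted above:

```python
# Back-of-the-envelope check of the cooling capacity math.
# 1 refrigeration ton (RT) = 12,000 BTU/h ≈ 3,517 W, per the definition above.

WATTS_PER_TON = 3517
towers = 2
tons_per_tower = 200

total_tons = towers * tons_per_tower             # 400 RT in the system
total_watts = total_tons * WATTS_PER_TON         # ≈ 1,406,800 W

print(f"{total_tons} RT ≈ {total_watts / 1e6:.2f} million watts of heat removal")
# -> 400 RT ≈ 1.41 million watts, or the equivalent of melting
#    about 400 short tons of ice per day
```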


Cable lacing introduction

One of the arcane arts of telco is cable lacing with wax string. I’ve been given a crash course by Juston, and I managed to demonstrate on some test cables and ties that the CEO hasn’t lost his knack.

Here are a couple examples of some simple cable lacing used to manage a set of large cables. Yup, I tied these myself. The third image is a cable end itself, with an AMP Champ tool and the connector just after termination.

Check out Wikipedia’s cable lacing article for more information. For additional info, tools and supplies (you can do this at home!), see Tecra Tools.

Going up?

Contractors crane cabinets into SF06

Lifting cabinets into SNFCCA06 CO


You can’t say our team (and contractors) don’t go the extra mile. Or vertical foot. This morning in the Balboa Park area of San Francisco, we installed two cabinets into SNFCCA06. This CO serves roughly from St. Francis Woods through Ingleside to Oceanview, plus some portions north of McLaren Park.

This was a tricky CO that was held up because there is no freight elevator, and the staircase is just too tight to get the cabinet up. These seismic-rated cabinets weigh in at around 300 pounds for the welded frame alone, so they are a bit of work to install. Once each cabinet is bolted down, the solid sides and locking doors are installed.

Today, the build-out of Fusion and FlexLink in San Francisco is over half complete. If you would like service in San Francisco for delivery within the next month, you can order Fusion (residential or small biz ADSL2+) or order FlexLink (business ADSL2+, T1 and Ethernet).

New UPS project update

We are in the process of building a third massive UPS for our datacenter in Santa Rosa, and a number of big parts have recently arrived. This project has been underway for over a year now, and is a really large undertaking.

The new custom-engineered breaker panel board arrived this week, and we now have most of the components on site. Construction has begun on the physical mounting of the equipment in our power room. We are excited about the new power delivery capacity that this project will provide, allowing for more than double our current power load.

If you’re interested in seeing the images in the gallery below, you can click for a medium-sized version, then click on the medium one for full size.


Rewinding power costs


Efficient datacenter cooling results in reduced costs

Our green datacenter cooling system has established a great track record since deployment. Our 2008 total utility costs are projected to come in very near 2006 levels, despite huge growth of equipment in the datacenter.

So, more servers in 2008, but far less power used to cool them. That is green and cost effective! Our growth in power consumption is ongoing, but the trend line has taken a nice step downward due to the investment in efficiency.

For more info on the innovative Bell Products Core4 system at Sonic.net, see this article.

Fusion and FlexLink network build update

Our network build teams are making very good progress on the 19 central offices which we are building out in the first phase. We have had teams in San Francisco, the East Bay, Sacramento and all around Sonoma County over the last month and a half. Here are a few photos of cabinet load-in to a San Francisco central office, plus an update of our hand-crafted build status board.

Cooling is key

As noted in the MOTD last night, we had a brief cooling failure in our Santa Rosa datacenter. This turned out fine. We had staff on site, and we have learned some things that will prevent this particular failure in the future.

For those interested in the technical reason for the failure, during the multiple power transitions from utility to generator and back, the variable frequency drives (VFDs) on the four redundant air handlers sensed an over-voltage condition and shut down to protect themselves. To address this, they have been re-configured: if they sense a fault in the future, they will wait eighty seconds for power to stabilize, then restart automatically.
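For readers who like to see behavior spelled out, here is a minimal sketch of that restart-on-stable-power logic. This is purely illustrative: the real behavior lives in the VFDs’ own configuration, and the names, voltage figures and check_line_voltage() helper below are hypothetical.

```python
import time

# Illustrative sketch of the "wait, then restart when power is stable" behavior.
# Hypothetical names and thresholds; the actual logic is a VFD configuration setting.

RESTART_DELAY_SECONDS = 80      # the eighty-second stabilization wait described above
NOMINAL_VOLTS = 480             # assumed nominal supply for the air handlers
TOLERANCE = 0.10                # assumed acceptable +/-10% band

def check_line_voltage() -> float:
    """Placeholder for the drive's own line-voltage measurement."""
    raise NotImplementedError

def after_overvoltage_trip(restart_drive) -> None:
    """Once tripped, wait for power to settle, then restart if it looks stable."""
    time.sleep(RESTART_DELAY_SECONDS)
    volts = check_line_voltage()
    if abs(volts - NOMINAL_VOLTS) <= NOMINAL_VOLTS * TOLERANCE:
        restart_drive()         # supply is back within tolerance; bring the fan back up
    # otherwise remain faulted rather than restarting into bad power
```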

The interesting thing, though, was that this presented an opportunity to see what really happens in a large datacenter without AC for a brief period of time. Total cooling downtime was 15 to 30 minutes, and during that time the temperature rose 15 degrees. The room is typically kept at 69 degrees Fahrenheit, so this pushed the ambient room temperature to about 85.

Meanwhile, in-cabinet temperatures for cabinets with a lot of equipment in them nearly touched 100 degrees F. That’s just ten to twenty degrees below the point at which we expect equipment to begin failing, so this was a close call for us.
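Back-of-the-envelope, those numbers show how little headroom there was. Assuming the rise stays roughly linear (a simplification) and taking the conservative end of the figures:

```python
# Rough headroom estimate from this incident's numbers.
# Assumes a roughly linear temperature rise, which is a simplification.

downtime_minutes = 30                 # upper end of the outage window
ambient_rise_f = 85 - 69              # room went from ~69 F to ~85 F
rate_per_minute = ambient_rise_f / downtime_minutes    # ~0.5 F per minute

in_cabinet_peak_f = 100               # hottest cabinets nearly touched 100 F
failure_threshold_f = 110             # assumed low end of "ten to twenty degrees" of headroom

minutes_left = (failure_threshold_f - in_cabinet_peak_f) / rate_per_minute
print(f"Roughly {minutes_left:.0f} more minutes before the hottest gear reaches {failure_threshold_f} F")
# -> on the order of 20 minutes, which is why this counted as a close call
```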

Datacenters are challenging environments to design. You need fully physically redundant Internet connections, plus fire suppression, physical and electronic security, power backup and redundant cooling. We’re very pleased with the efficiency of our new AC system and its VFDs, and this incident makes clear just how critical cooling is.