Month: August 2008

Close shave

Negative Exposure Assessment Drilling Protocol Training

I wonder what the people at the grocery store thought when staff member Matt Merner purchased more than twenty cans of shaving cream. I suspect they guessed he was planning some sort of late-night prank.

That would be an incorrect assumption. Matt was actually preparing for our upcoming telco central office installations.

To prepare for the deployment of our equipment in eighteen additional telco central offices, a group of our staff was trained in negative exposure assessment procedures. This is a mandatory process for our handling of floor tiles which contain, or are presumed to contain, asbestos.

Many telco central offices were built before the hazards of asbestos were well known. As a result, the buildings contain insulation, fireproofing, pipe coverings, conduits and other products which may contain asbestos. The item we deal with is the floor tiles, which we drill through in order to mount our equipment cabinets on the concrete floor.

The solution: Better buy Barbasol!

Staff member Clay Carley is our resident expert, an officially certified “competent person”. Asbestos handling is obviously heavily regulated (learn more here), and Clay’s been through the required training for this task. Clay trains the staff members who will be working with or near the process. (This includes curious CEOs.)

The basic process involves lots of wet paper towels, shaving cream and a few Ziplock baggies. The shaving cream forms a barrier, capturing asbestos fibers which may be released by the drilling process. Really high tech.

Construction in Sebastopol is scheduled to begin next week. To keep the floor smooth and soft, we’ve opted for Barbasol with Aloe.


The San Francisco Manager Tools Conference.

Kelsey and I have become big Manager Tools fans, and as such we jumped at the opportunity to attend the Manager Tools Effective Manager Conference when it came to San Francisco.

Kelsey Cummings

Augie Schwer

We learned a lot and met a bunch of great people; I would recommend it to any manager or aspiring manager who wants to learn more about being an effective manager.

Slaughtering the hogs

How much Internet is too much? Apparently it’s 250 gigabytes, enough Internet content to fill up a $55 hard disk drive.

Comcast made news today by announcing a usage cap for Internet users. You can read more about it at PC Magazine. See also the DSLReports coverage.

The reason for the cap isn’t economic, it’s technical. In a shared physical topology, usage must be managed in order to prevent performance problems due to congestion. Cable networks today are Hybrid Fiber Coax (HFC) networks, where the video and data are carried on fiber to distribution nodes which serve 500 to 2,000 homes. All of these homes are on a common coaxial cable network, and they share the capacity of that network. A picture is worth a thousand words, so please view a simple HFC network diagram now.

This is a bit before my time, but a cable network is like a telephone party line. Common until around the 1940s, shared party line service was how most homes received telephone service. It was cost effective because the telephone company didn’t have to run wire from every home back to the central office; instead, the line ran from house to house, and the circuit was shared. Telephone companies abandoned party line configurations over fifty years ago, and that has given the telcos (and DSL providers like us) a big edge over cable.

For the last few years, Comcast has managed heavy usage by warning customers who used “too much,” without ever defining what too much was. This practice has been called an “invisible cap.” My guess is that the invisible cap was actually more effective than one which is well defined and documented. If you know how much you’re allowed to use, it’s possible to use bandwidth monitoring software to run right up to, but not over, the limit. With the limit unknown, users simply lived in fear and presumably curtailed their usage. Notably, users in locations where Comcast was the only broadband option would be especially motivated to avoid getting the boot, as they’d have nowhere else to go for broadband Internet.

Of course, the telcos’ response was to point this weakness out in the ads many of us fondly remember. PacBell was criticized by the cable companies, who claimed they really didn’t have a problem. In fact they do; the issue then and now is the same, and caps are the only real solution. (See that diagram again, and think “shared network” – there’s only so much capacity to go around among the 500 to 2,000 homes.)

The other solution Comcast tried was filtering peer-to-peer traffic. They got in trouble with the FCC for this, and the new, openly documented and disclosed cap is the result. Also notable: the illegal filtering of P2P traffic by Comcast is what really kicked the net neutrality cause into high gear. The folks at Save the Internet are continuing to fight for uniform and unfettered Internet access.

So, sit back and relax, watch the old PacBell ads and enjoy the trip down memory lane. Comcast has finally stuck the pigs, and they sure are squealing!


Punt!

Sonic.net and BroadLink staff re-engineer the BroadLink network

As mentioned in the status blog (formerly system MOTD), BroadLink Communications had a system failure in a critical component of their network today. A hard drive in their core router failed, and while they do have spare parts, they were unable to rebuild it.

BroadLink’s wireless towers in Santa Rosa continue to serve a small number of customers, many of whom still cannot obtain DSL broadband service due to their locations. While BroadLink is a business in decline, it does provide a very valuable service to some underserved locations.

Designed over ten years ago, the network is based upon 802.11 wireless access points which serve intelligent Linux CPE at customer premises. The network’s core converts bridged Ethernet to ATM PVCs, one for each customer. (A PVC is a permanent virtual circuit – think of it as a unique customer’s connection inside a larger pipe.) The ATM is fed via a T3 to the Sonic.net network, where we provide IP routing to the Internet. Provisioning and automated management of the per-customer configurations is all done via the ATM PVCs. This system was designed before more modern solutions like dot1q VLAN tagging, PPPoE or MAC RADIUS auth, which could provide similar functionality, were viable. The ATM PVC configuration was a very innovative and tidy solution; if anything, it was before its time.

So, in summary, we’ve got Ethernet on the WAN (the wireless), which feeds into a magic one-of-a-kind box (the “Red-C”) that converts it into ATM PVCs. That’s handed off to Sonic.net, where we terminate the ATM on a Redback SMS, basically the same way a DSL customer is provisioned. In this way, the hundreds of wireless customers are managed in our systems just like the nearly 50,000 DSL loops we manage. Out the other side: Ethernet. Hmm. Basically exactly what went in, but with per-customer provisioning, locking, diagnostics and management in the middle.

The Red-C died – hard drive failure. So, we pulled it and the Redback out, and are simply routing BroadLink’s entire bridged network to the Internet. See the pic for the combined Sonic.net and BroadLink response team working on these changes to the network this afternoon.

This change is transparent to the end-user, but not really sustainable for the long term due to the inability to provision, diagnose, lock, disconnect and manage individual end-users. But, everyone is online, and we will address the bigger picture another day.

The photo includes, top to bottom: Jason Kane, Sonic.net wireless product manager; Tim McAllister, BroadLink board member; Nathan Patrick, Sonic.net network architect; Scott Woods, former BroadLink engineer (thanks, Scott!); and Josh, BroadLink’s technician. Unfortunately, CEO Warren Linney is at Burning Man and is unreachable. Warren, I hope you remembered your goggles, water, sunscreen, and a spare cloned backup hard disk drive!


Critical systems: Power backup

Crossed wires shorting out, Troy, Illinois (image via Wikipedia)

Obviously, an ISP doesn’t function without electricity, so we’ve got big investments in redundant power here.

A datacenter power system consists of multiple inputs, which are arbitrated by a transfer switch, and multiple loads, such as UPS systems and computer room air conditioners (CRACs).

The primary input is PG&E, and the transfer switch monitors the quality of this input. If the utility power goes offline or fades, the transfer switch sends a signal to the starter on the generator, which powers up automatically. Once the generator power output is online and stable, which typically takes twenty to thirty seconds, the transfer switch physically swings a huge set of contacts over to the new input, transferring the load.

The UPS systems and their batteries carry the datacenter computing load during this startup and transfer, while CRAC loads are dropped during the transition. A datacenter can’t function for long without cooling, so the entire generator and transfer switch system must function as designed in order to stay online.

The generator itself is the really cool bit of this whole setup. For those who are into engines, it’s a 24 liter V-12 Detroit Diesel with twin turbochargers. That’s a full two liters of displacement per cylinder – imagine a piston and cylinder the size of a 2 liter soda bottle, then gang up twelve of them. It’s a huge engine. At full throttle it generates over one thousand horsepower, and three quarters of a megawatt of power.

In our five years at our Apollo Way location, the generator has only been called on to respond to a power outage twice. PG&E has done a great job for us, delivering quite reliable power. But we still must test-fire the generator every week, top up its fuel every few months, and trade out old fuel for new periodically. Its full generating capacity is load tested every few years by hooking it up to a massive resistor/heater bank. This maintenance and load testing is critical to assure that the power will be there when we do need it.


When Perl and RPM don’t get along.

Sometimes when building RPM packages you will get an RPM that requires a file it already contains. This seems pretty lame (which it is), but here is an example and a workaround.

Building a package for freepbx we see this output:

rpmbuild -ba freepbx.spec
--------snip-----------------
Provides: config(freepbx) = 2.4.0-0
Requires(interp): /bin/sh /bin/sh
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
Requires(post): /bin/sh
Requires(postun): /bin/sh
Requires: /bin/bash /usr/bin/env /usr/bin/perl /usr/bin/php config(freepbx) = 2.4.0-0 perl(DBI) perl(FindBin) perl(retrieve_parse_amportal_conf.pl)

The dependency we don’t want is “perl(retrieve_parse_amportal_conf.pl)”.

What rpmbuild does is go through the list of files and run “ldd” against all executables to find required libraries.
It also goes through each Perl file and looks for “use” and “require” statements to pull out required Perl modules.
So when a developer does a perfectly legitimate thing like 'require "retrieve_parse_amportal_conf.pl"' to include functions and such into their program, rpmbuild sees that the file is needed and adds it to its list of required files.

Now rpmbuild also goes through and looks for what packages/files Perl programs provide. It does this by scanning through the files and looking for “package” statements. If you just have an include file with functions, you don’t have a complete module and won’t have a package statement either, so rpmbuild will never see that your file provides itself! Fortunately, there are a couple of workarounds.
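As a quick, hypothetical illustration (the file names here are made up), you can run rpm’s dependency scripts by hand to see the mismatch: an include file with no package statement provides nothing, while any script that requires it still picks up a perl(...) dependency.

cat > helpers.pl <<'EOF'
# plain include file: functions only, no "package" statement
sub parse_conf { return 42; }
1;
EOF
echo 'require "helpers.pl";' > caller.pl

/usr/lib/rpm/perl.prov helpers.pl    # prints nothing: no package statement, so no provides
/usr/lib/rpm/perl.req caller.pl      # prints: perl(helpers.pl)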

In the Perl file that rpmbuild is requiring, you can define our $RPM_Provides = "yourfilename.pl". rpmbuild will pick this up and happily add it to the provided file list. The other method is slightly more complicated, but works well if you don’t want to patch the source code.
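A minimal sketch of that approach, using the hypothetical helpers.pl from above (double-check the exact provide string with /usr/lib/rpm/perl.prov afterward, so that it matches the perl(...) name rpmbuild listed in Requires):

# added near the top of helpers.pl
our $RPM_Provides = "perl(helpers.pl)";

# then confirm what the provides scanner now reports:
# /usr/lib/rpm/perl.prov helpers.pl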

In your rpm spec file under the %prep section after the %setup add the following code:

# wrap the stock Perl dependency generator with a sed filter that drops
# the unwanted perl(yourperlfile.pl) requirement (EOF is escaped so $*
# is written into the generated script rather than expanded here)
cat << \EOF > %{name}-req
#!/bin/sh
%{__perl_requires} $* |
sed -e '/perl(yourperlfile.pl)/d'
EOF
%define __perl_requires %{_builddir}/%{name}-%{version}/%{name}-req
chmod 755 %{__perl_requires}

Where yourperlfile.pl is the file you want to exclude from the RPM requires check.
This should make your RPM build happily and exclude that file from the requires check.

If you want to see the actual scripts rpmbuild runs, take a look at:
/usr/lib/rpm/perl.prov
and
/usr/lib/rpm/perl.req
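
For example (the paths here are hypothetical), you can preview what the dependency generator will see by feeding those scripts a file list on stdin, the same way rpmbuild does:

find ~/rpmbuild/BUILD/freepbx-2.4.0 -type f | /usr/lib/rpm/perl.req
find ~/rpmbuild/BUILD/freepbx-2.4.0 -type f | /usr/lib/rpm/perl.prov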

It’s dead Jim…

Our dishwasher at Sonic.net died; the “heat dry” got stuck on and melted some of the dishes.

The dishwasher melted our cup; Mr. Hasselhoff was not pleased.

While a replacement was on order some people forgot how to wash a dish “manually”; thankfully Tony (our resident Web Design Guru) was on hand (pun intended) to help out.

When the replacement arrived Juston (our resident Facilities Guru) went to work.

Things did not go as planned at first; this made for a cranky Juston.

Juston eventually pushed on and now our shiny, new, super energy efficient dishwasher is purring away; this makes for a happy Juston.

The moral of this story is that sometimes things break at Sonic.net and when they do people pitch in to help, do extra work, and keep working until the problem is fixed.

However… sometimes we still forget the difference between dish soap and dishwasher detergent:

Next generation product pricing

Fusion Bundle Logo Concept

As discussed previously, we have been working for some time toward the launch of new next generation products. As we get nearer deployment, some of the details are firming up.

First, bundling. This is a hot topic – some customers really like bundles, and some really do not. We believe in providing as many options as we can, so our next generation products will be available both with and without other services bundled. Of note, you do NOT need to have a voice telephone service for these products, and in fact at this point our initial offering does not include voice. The voice offering is likely to arrive sometime late this year.

Second, a name. Our current tentative name for the family of products is “Fusion”. Maybe that’s “Sonic.net Fusion Broadband Internet”, or “Fusion: Next Generation Products”, etc – it’s a working concept at this point. The Fusion concept encompasses all of the products that will be available. If we stick with this name, the product is likely to have an atom logo, where each electron in orbit represents an additional bundled product. Opt for broadband only, or add in voice, TV, or mobile. Each adds a ring to the orbit.

Finally and most important, pricing. Here are the initial launch products and prices. Note that these are standalone, delivered on a dedicated copper pair, so unlike today’s DSL, you don’t need to have a voice line and associated costs.

(Note: pricing updated and current as of 8/26/2009; reduced items show the original price in parentheses. -DJ)

Residential locations, dynamic IP:

  • 1.5Mbps/1Mbps $35/mo
  • 3Mbps/1Mbps $40/mo
  • 6Mbps/1Mbps $45/mo
  • 12Mbps/1Mbps $50/mo (was 10Mbps/1Mbps at $65/mo)
  • 18Mbps/1Mbps $55/mo (was $80/mo)

Residential locations, 8 static IPs:

  • 1.5Mbps/1Mbps $55/mo
  • 3Mbps/1Mbps $60/mo
  • 6Mbps/1Mbps $70/mo
  • 12Mbps/1Mbps $75/mo (was 10Mbps/1Mbps at $90/mo)
  • 18Mbps/1Mbps $80/mo (was $105/mo)

Business locations, dynamic IP:

  • 1.5Mbps/1Mbps $45/mo
  • 3Mbps/1Mbps $50/mo
  • 6Mbps/1Mbps $60/mo (was $70/mo)
  • 12Mbps/1Mbps $70/mo (was 10Mbps/1Mbps at $90/mo)
  • 18Mbps/1Mbps $80/mo (was $105/mo)

Business locations, 8 static IPs:

  • 1.5Mbps/1Mbps $55/mo
  • 3Mbps/1Mbps $60/mo
  • 6Mbps/1Mbps $75/mo (was $80/mo)
  • 12Mbps/1Mbps $85/mo (was 10Mbps/1Mbps at $100/mo)
  • 18Mbps/1Mbps $100/mo (was $115/mo)

Bundling offers the opportunity to drive costs downward – for example, adding voice service (when available) reduces the monthly cost of both products by a combined total of $20/mo. Adding television saves another $10/mo. At this time, bundle savings for adding mobile have not been set.

Product speeds are tiered based upon the capabilities of the loop itself. So, for example, the maximum downstream speed of the 6Mbps/1Mbps product is between 4 and 6Mbps, the 10Mbps/1Mbps between 7 and 10Mbps, and the 18Mbps/1Mbps between 11 and 18Mbps. Maximum speed is based upon the line’s electrical capability to carry ADSL2+ data, and it will be faster than legacy ADSL1 would be on the same CO-based loop.
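
For illustration only, here is a rough sketch of that qualification rule as a script; the thresholds simply restate the ranges above, and the real determination is made from the measured ADSL2+ sync rate of the loop:

#!/bin/sh
# hypothetical helper: map a measured downstream sync rate (whole Mbps)
# to the product tier it would qualify for, per the ranges above
rate_mbps=$1
if   [ "$rate_mbps" -ge 11 ]; then echo "qualifies for the 18Mbps/1Mbps tier"
elif [ "$rate_mbps" -ge 7  ]; then echo "qualifies for the 10Mbps/1Mbps tier"
elif [ "$rate_mbps" -ge 4  ]; then echo "qualifies for the 6Mbps/1Mbps tier"
else                               echo "qualifies for a lower tier (3Mbps or 1.5Mbps)"
fi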

For customers near downtown Santa Rosa, these products will be available in just a couple weeks. About ten additional cities plus expanded Santa Rosa coverage will arrive in the coming months.

Oh, and yes, the free clip art atom that I’ve used here has one too many electrons in orbit. The max would be four. Broadband, voice, TV and mobile.