Tuesday, May 24, 2005

There is currently an initiative being run out of ADA

From Anonymous:

There is currently an initiative being run out of ADA addressing
Information Technology Infrastructure. While this effort has some very
worthy goals, notably in the area of purchasing strategy and obsolete
equipment replacement, there is the question of
"standardization".

If you've been a user of the Lab's computing infrastructure for any
length of time, you've likely observed the various travails visited on
support personnel and users by viruses and spyware, with exploits
targeted primarily at the Microsoft Windows platform. This situation
is not going to improve.

While other operating systems and environments are certainly not free
of vulnerabilities and exploits, there are fewer specifically targeted
at them.

What does this have to do with ADA's effort? Plenty, because the
outcome is allegedly "predecided" in favor of Windows, regardless of
the effort's results and conclusions. This is a classic example of how
a lot of technical decisions seem to be made at the Lab: decide first,
create a huge, expensive effort to get supporting data, and then make
the data fit the decision.

It is quite ironic that a workplace that has the alleged goal of
secure operations tolerates "lowest common denominator" decisions in
the area of security, especially those that are made autonomously and
capriciously.

We, the authors of this submission, believe that the ADA has not
presented a well-considered plan. We are members of LANL's
computational community, and our assessment is that the ADA's plan
makes no sense from a security point of view, nor from an economic
point of view. We therefore urge the new director and his team of
advisers to place an immediate moratorium on the ADA's plan, and
launch a reevaluation of the proposed initiative.

Comments:
I've long maintained that when they come for my Mac, they'll find my letter of resignation taped to the machine.

Microsoft Windows is utterly unfit for deployment into a technical environment. Security risks notwithstanding (and it's damned hard to set that issue aside, even rhetorically), the lack of good X11 software militates against its use. And yes, I've had to use Reflection X in the past, and no, it's not a good piece of software except in the eyes of someone who considers Windows itself to be pretty good.

I'd go with a general rule of thumb: no person at the Lab who has Microsoft Project installed on a computer gets to have any say whatsoever in the platforms used by the technical staff. Or the clerical staff, the custodial staff, or the people who hang out in Hot Rocks using the WiFi there to find a safe body-piercer in NNM.
 
Probably in an attempt not to seem overly critical of MS Windows, the poster does not state the obvious: Windows, because of its architecture, is far less secure than the Unix-based Mac OS X and Linux systems. Sure, all systems have vulnerabilities, but most of Windows' vulnerabilities are designed in, and trying to keep all of LANL's Windows boxes continuously patched is a fool's errand.
 
They will not touch the Macs -- this is about supporting all the different PC boxes. Although I have no doubt that some people would like to control the computing choices, the overwhelming favorite of the technical staff is the Mac. I know at least one member of the EB who would kill if Rich Marquez tried to touch his Mac.

On the other hand, the CCN folks cannot seem to support Dell vs. IBM vs. Compaq. Probably a good idea to limit those choices.
 
Here's the problem. The computer support staff are stuck between a rock and a hard place trying to keep your computers secure. Your Mac is wonderful and all, but it's a separate platform that requires time and resources (read: money) to manage correctly. By their reasoning, with a single computing platform, patching, management, policy enforcement, etc. are much more uniform.

Now put down your mac, step away from the keyboard, and there will be no bloodshed.
 
Windows runs on more than 90% of the systems in the US. Macintoys, UNIX boxes, and Linux PCs make up the rest. There are very few scientific and/or engineering applications available on the Mac.

Give it up!
 
Putting uniform computing requirements on the staff at the Lab is a bad idea. Providing documented, supported computing environments for those who want or need them is good. The computing requirements of this Lab are so diverse that one can't place a single requirement on everyone, other than certain security guidelines so that your computer does no harm to others. To turn this into a particular OS requirement is to mishandle the problem.
 
Oh no. Computer religious wars. As if LANL didn't have enough problems, management is opening this can of worms.
I would not be the least bit surprised to see the new contractor come in and insist that everyone go to Windows for all official business. No doubt a few Macs would be allowed for those who have a lot of clout. Same with unix or linux desktop systems.
It is a total waste of government funds to support all three. On the other hand, in the current LANL environment, it is impossible to mandate just one.
Rich Marquez must be feeling left out and unloved or he would never have gotten this whole mess started. It has been tried before.
 
Herein lies part of the challenge that distracts from moving forward and getting the work done: people who will not budge, and people who will not tolerate those who won't budge, when it comes to computing platforms.

This whole issue doesn't seem to warrant "show stopper" status when you apply common sense and logic. If you make everyone use a system or platform that has known risks, you set up the entire computing community to be vulnerable - even if you think you can stay a step ahead of the hackers or other attacks.

Also, if you make everything so homogeneous that there is no room to explore where other organizations are headed, you end up being the last to hear of, know about, or use advances made on other platforms. We cannot be lemmings, even lead ones, lest finding our demise in the sea be our only claim to fame.

It seems that the 'support' concept is slightly bastardized in this setting; support is supposed to enable those doing the work to do so with relative ease, not the other way around. If anyone thinks that supporting the desktop needs of advanced (or basic) science would be a breeze, they need to head back for remedial 'may I help you' courses.

It seems that there should be some uniformity where it makes sense; however, there should also be a margin of knowledgeable and stable support for the needs of those who are not in the mainstream of desktop users.
 
Uniform computer platforms make business sense, as do uniform programs such as MATLAB, Project, or Pro-E. LANL is just too individualistic to comply...
 
This is nothing more than a knee-jerk attempt to answer a DOE auditor's finding that there are too many different computers at LANL. First, IA came up with a list of "supported operating systems" which, depending upon the age of a computer, may be impossible to follow. Since IA has no teeth to enforce its OS standards, Marquez, to earn some of his ridiculously high salary, was tasked to lead a standardization effort.

Since any PC will run Windows or Linux, standardizing on a hardware brand is patently ridiculous. Forcing people to purchase one brand of hardware via a JIT contract is detrimental to northern New Mexico businesses, which then have no motivation to work with various distributors to get good prices on hardware and to be competitive. It's far cheaper to purchase some number of disks, dvd/cd drives, memory, CPUs, fans, motherboards, etc. and build your own system. It takes about half an hour to put one together and another hour or so to install the free Red Hat system, Fedora Core. Of course it won't be in a fancy case that says Dell on it, but it will probably be more useful because it won't come with expensive SCSI drives and hard-to-support video cards.

Making Windows the LANL standard would cost millions of dollars in support. Yet LANL/CCN is putting a tremendous effort into setting up the infrastructure to support it. Everything done on Windows can be done on Mac OS X, which has yet to exhibit many security issues.

The real problem is that no one has bothered to ask why people are using the computers they have, no matter what the age of the computer. And no one will admit that the goal of weeding out "outdated computers" can never be accomplished, because there will never be enough money in the budget to replace desktop computers quickly enough not to have some "obsolete" equipment.

Decisions on computer equipment are a group-level function, because the group staff are the ones who should know what they need. The Director needs to return financial responsibility and decision making to the group leaders and let them and their staff purchase what they need to support LANL.
 
Oh, boy. Another Mac/*nix v. Win flame war, complete with "cold dead fingers" and similar statements. These are fun to watch, but not all that productive. Given the recent emphasis on constructive commenting in this blog, I'll try to focus on logic and math.

The open-source and third-party development community largely ignores Mac/Unix, favoring instead either Windows or Linux. If the vendor is interested in making money, they'll release for Windows first. Free software usually goes to Linux first, but responds to popular pressure by putting out a Windows version if the product gains any significance in the community and is not a Java or Web implementation. Apple spends a huge amount of time trying to get the major vendors to produce Mac versions of their products. Enterprise-class products typically come out first for Windows and maybe later for Linux, but Macs are usually relegated to whatever Web interface the vendor provides. Rich client apps for Macs are rare these days.

While it is true that there are a larger number of exploits on the Windows platform, and that platform is targeted first (bigger target = more potshots), the monitoring and response community is commensurately larger and very sophisticated. Corporate IT departments overwhelmingly prefer to operate in environments that have robust support communities. Now, I'm not saying that there aren't robust support communities for the other platforms. They're just not as huge as the Windows communities. Don't underestimate this comfort factor when an IT manager is making a decision; IBM built (and maintains) their empire on this comfort factor.

A properly patched and administered machine can be built on any of these platforms and will be safe against all known exploits; all three platforms must be properly patched and administered to be safe and compatible with others.

All machines at LANL must be scanned for vulnerabilities because some administrators insist on creating islands of incompetence. Don't argue this point; it's a worldwide fact of life, and a strong argument for automated administration.

Patch distribution and the bulk of administrative tasks can be automated, but the patching and centralized administration systems are complex and require specialized training. That said, once in place and running well, they provide a significantly higher level of security and compatibility at a measurably lower cost than island-type administration.

The math is simple: 3 platforms = 3x the cost in management systems, scanning complexity, administrative talent, and security risks. Simple math is extremely effective with managers, auditors, and regulators. A complex argument against a conclusion supported by simple math rarely succeeds.

Switching platforms requires new software licenses, retraining, migration, and re-work of current documents. These are very significant costs. At LANL, Windows outnumbers all other platforms combined by a significant multiple. Therefore, if there's switching to be done, the lowest cost switch is to the Windows platform.

However, I don't believe anyone has published the numbers describing average retraining and migration costs for the users of other platforms at LANL. Without that number, it's hard to tell whether it would be cheaper to keep the current model and pay for the increased administrative load, or do the switch. The administrative load is ongoing; the switch is a one-time cost. Once someone comes up with those numbers, the decision will be clearer.
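
To make the shape of that comparison concrete, here's a back-of-the-envelope sketch with invented placeholder numbers (nobody has published the real figures, so treat every value as pure illustration):

    // Hypothetical break-even calculation: one-time switch cost vs. the
    // ongoing overhead of administering multiple platforms.
    // All numbers are invented for illustration only.
    public class BreakEven {
        public static void main(String[] args) {
            double switchCost = 30_000_000;        // one-time migration cost (invented)
            double extraAdminPerYear = 6_000_000;  // yearly multi-platform overhead (invented)
            System.out.printf("Switch pays for itself after %.1f years%n",
                              switchCost / extraAdminPerYear);
        }
    }

If the break-even lands inside the planning horizon, the switch wins; if it doesn't, the status quo does.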

For those of you with a science background, remember that arguments citing specific, isolated examples (e.g., my favorite application runs best on my Mac) are treated as data points, rather than experimental results supporting a hypothesis. The arguments in favor of corporate IT departments standardizing on Windows are substantial and backed up by simple math. Statements like "trying to keep all of LANL's Windows boxes continuously patched is a fool's errand" don't carry much weight when thousands of IT departments across the nation do just that every day.
 
To 5/24/2005 09:16:43 AM: Well said; that's just the way corporate decisions are made. Do they apply to us? Not in the past, but maybe they will in the near future.
 
"The math is simple: 3 platforms = 3x the cost in management systems, scanning complexity, administrative talent, and security risks."

This is the same "simple" math that predicted exponential population growth, and it fails to account for any of the relevant factors. Many studies have shown that Macs are cheaper to run and maintain than Windows boxes on a unit-for-unit basis, and none of the institutional resources spent chasing down malware are required on the Mac (partly because of design differences -- the Mac requires a password to escalate privileges every time an app wants to do something system-global).
 
"No uniform computer platforms make business sense- as do uniform programs such as MATLAB or Project or Pro-E. LANL is just too individualistic to comply..."

On the contrary, 95% of the LANL staff are not computer support people, and want to make platform decisions based on what makes sense from a productivity standpoint (should we force all the group admins to use one-line telephones in order to simplify inventory keeping?), not based on the exclusive concerns of the computer support people.

Computers are one of the most important tools at the Lab. Our strategy should not be dominated by "simplistic" issues. There's no way you can do science at the Lab if all the technologies are mandated and standardized.
 
Let's do business the Rich "I love women" Marquez way.

When DOE has audit findings that Windows is not secure, call Rich "I like staring at your chest" MARQUEZ to put the correct security patches on your machine.

What does ADA know about IT? I mean, Amy Knucklehead was writing IT policy.

Change ADA to ADD !!!
 
Check out CCN division. They just split CCN-2 into three groups: CCN-1, -2, and -3, all doing desktop support. They are rapidly assimilating all the independent operators. Support in those groups/divisions is going into the toilet in a rapid fashion. Instead of hiring people who have the big picture, they insist on hiring people who are specialized in a very narrow area. Then there are all these cookbooks that are always a few versions behind and are supposed to help their techs install systems. How backwards is that?
 
This initiative is an example of LANL's dysfunctional management. In the days of Bradbury and Agnew, there were divisions doing science and departments doing support. The divisions and the director sat at the table making decisions. The departments provided the support for those decisions.

Now the support divisions sit at the same table as the science divisions, and they carry as much weight as the science divisions. This is screwy, because they do not have the technical knowledge necessary to participate.

As was mentioned in another post, this was initiated because of a DOE finding. This finding has become important because neither our last two directors nor UC has bothered to say to the DOE, "Yes, our staff use a variety of computers in the course of their work, but we are confident that they know what they are doing. And we believe that your audit is too detailed and infringes on our staff's being able to use what they need to support LANL's mission."

The DOE reminds me of the schoolyard bully. If UC and LANL management would stand up to them, rather than asking "How high?" when the DOE says "Jump", maybe the DOE would have more respect for us.
 
As a support person who would probably be handed his/her head for agreeing with the 5/24/2005 07:37:17 PM post, I'll take the risk just to enjoy a moment of common sense and agree anyway. One of the private sector concepts that hasn't made itself obvious in this arena is a market-driven supply/demand dynamic in the support side of the house. The support side seems to think that they ARE the show rather than being the behind-the-scenes support for the real show - technical work.

It would be amazing if the support and management teams would set their sights on the most direct path to success and allow any successfully completed projects to speak for the Lab's abilities.
 
"There's no way you can do science at the Lab if all the technologies are mandated and standardized."

This point has some validity. When you're doing real research, you need to have the right tools for the job. Very understandable, especially if the scientist doing the choosing has sufficient expertise to validate the need for a specific class of software or computing capability. Not so understandable when the scientist is running a Mac just because they like the platform.

I've seen too many instances of some scientist running an ancient, creaking old computer to support a piece of experimental equipment, and they haven't bothered to do any upgrades because the prehistoric software they're running only works on Mac OS 6 or Win 95, and the vendor has long since moved on to greener pastures. Then the computer dies and it's a big disaster, because Real Science is being held hostage by a dead computer.

"I'm a scientist, therefore I get to pick my computer" only holds water if you're a real, live, practicing computer scientist. We don't have many of these at the Lab, because the Lab doesn't do much real Computer Science research outside of a few high-performance computing and networking projects.

If you're not doing real research, then bow out of the debate.
 
Fortunately our division runs a top-notch shadow organization for support that continually outperforms CCN (often getting their hands slapped for it) and realizes how important it is to cater to the science needs. As a software developer I *MUST* maintain very sophisticated development environments on both LINUX and WINDOZE. I would be less productive without a local stellar sysadmin. Often we have some team members configure their particular LINUX machine for a particular development flavor, and others just SSH to that machine to use that environment. Try that on WINDOZE with multiple developers running concurrently! I'm seriously considering purchasing a MAC, since our development software *in theory* is platform independent. Then I could reduce my WINDOZE machine to just another platform port test environment. For many projects at the lab there is no "one size fits all". I have total faith that our shadow organization would stop any such insanity!!!!
 
"It's far cheaper to purchase some number of disks, dvd/cd drives, memory, CPU's, fans, motherboards, etc. and build your own system. It takes about half an hour to put one together and another hour or so to install the free Redhat system Fedora Core."

Yeah, right. Buying individual parts at retail will net you maybe $100 in savings off of Dell's price, or $200 off of HP's price. Then you spend 1.5 hours or so at a LANL loaded cost of $200-$400 per hour (depending on your program codes) building the thing. Net: Loss.

Yeah, I know you say you can start the install and go do something else while it runs, but that's just BS. Too many dialogs popping up with questions during the install. Every tech that does this kind of build winds up babysitting the thing until it's done. Unless, of course, you're transferring an image like the big boys do, but Fedora doesn't come as an image, and an image takes a lot less than an hour to run, so I know you're not doing that.

There are many reasons standardization is popular at large organizations. Avoiding islands of incompetence like this is one of them.
 
To 5/24/2005 11:11:36 PM:
What happened to Patrick Brug and Anthony Stanford with their TIG-based cost-saving standards?
 
CCN has been hosing things up for years under the benign neglect of John Morrison. He understands computer science research but has no clue how to run a support organization. What starts at the top percolates down through the management chain. That's why CCN's service levels are so abysmally poor. For an organization that dysfunctional, the Borg approach is just about the only viable survival tactic.

Come the contract switch, either Bechtel, LM, or NG will shoulder the service load. Watch how fast CCN's service functions get outsourced when that happens.
 
"Now the support divisions sit at the same table as the science divisions, and they carry as much weight at the science divisions. This is screwy because they do not have the technical knowledge necessary to participate."

This may have held water back in the days when you could toss waste out the back door and down the canyon with nary a concern for what happens later, or when every computer was an island unto itself unless you connected a couple by modem. The regulatory and technical environment is a bit more complex these days.

At LANL, most "science" managers think "software developer" is a technician-level position, and have no clue what a DBA does or how much they're worth. Same for construction management, security, environmental regulations, and facilities management. The job of the service organization is tremendously more complex than it was 20 years ago, and having "science" people make decisions about "support" functions can get LANL in trouble just as fast as the reverse.
 
I have been a shadow for 20+ years and have always been proud of being able to provide top-notch, responsive support to those I have had the pleasure of working for. CCN support is a slow-moving elephant, and it's getting bigger and slower. I still say that local in-house computing teams, if properly organized, are less expensive and more responsive than a centrally run operation. I agree that there is a need and requirement for centralized infrastructure and systems that support the institutional-level stuff, but support in the trenches, and the flexibility to do what's right for the organization you support, should be paramount.
 
True, the conclusions of ADA's IT policy document were predecided by no one other than Rich "I like women" Marquez. The sad fact is that no one on the team who participated in writing that document was a practicing IT or computer science specialist. Folks from LANL's computing S&T research side were *deliberately* barred from participating in this study. Marquez and his yes-how-high-should-I-jump-sir? underlings, Carolyn Zerkel, Charlotte Lindsey, Kim Mousseau, Amy Knucklehead, Camillo Perez, Beth Gardiner, and others, made sure of this. The saddest thing is that none of these folks will challenge or stand up to Marquez. Why won't they? Are they really that afraid of Marquez, or just too stupid to think on their own? The larger question is how someone can in good conscience author a report to fit a set of already-established conclusions; this goes against the very nature of scientific research. But then again, not much real science goes on here anymore, and none occurs in useless directorates such as ADA.
 
To 5/25/2005 07:18:51 AM:

Lindsey, Mousseau, Perez, and Gardiner ARE experienced CS/IT specialists. Most of them have master's degrees in the field and average more than 15 years of experience, starting from the trenches. You are confused.

Why was S&T excluded from the proceedings? Because the attitude of the S&T reps who bothered to respond to the invitation was "We're special. You don't understand what we do, so write in your document that anything labeled S&T can have any computer it wants." This was the opening position, and they simply stuck with it until their repetitions of the above message, along with the continuous refrain of "unfunded mandate!", irritated everyone else enough that they got voted off the island.

If S&T is so darn smart about enterprise IT, why don't THEY come up with a model and a plan that can stand up to peer review? Because that would be harder than continuously repeating "We're special. Leave us alone."
 
A top-level post in this blog states, "I've had it with a bunch of clueless half-educated bureaucrats trying to impose the same computer standard on scientists and secretary," in support of a decision to start looking for a university position with an intent to leave in a couple of years.

I think that's really the crux of the argument here. The assumption is that scientists will get stuck with cheap, generic Windows boxes suitable for office work and not much more. I don't know about your organization, but I don't see many secretaries out there using systems like this.

A standard software development workstation (even for interns) is pretty far ahead of a secretarial workstation out in the real world. Also, look at the reading they give their interns. I don't see this occurring to many "science" managers. Most of our interns get the machines cast off by the secretaries! And we wonder why our recruiting / retention of students is so low.

If the software you're developing has to run on multiple platforms, you have an excellent reason to have multiple platforms sitting around your offices. If the experiment you're running uses equipment that is controlled by a Linux or UNIX box from the manufacturer, you have an excellent reason to run the same kind of gear to connect with it and manage it. If the experimental gear you have requires insecure ftp transfers because it was developed by Brain-Dead Developers, then set up an isolated LAN with no connection to a LANL partition and keep on computing. If your customer requires you to develop using Commodore VICs, go right ahead, but get the requirements in writing.

Just don't expect to be able to connect to the enterprise business systems to do your time entry, shared calendar, manage project progress, manage authorities, manage budgets, attend training, run collaboration software, work with manufacturing management or plant engineering systems, or deal with the "service" bureaucracy using these systems. For that, you'll need a Windows box. Because quite frankly, all those other "exceptional circumstances" are just down in the noise when compared to the traffic generated by the day-to-day business activities of a large enterprise.

And no, mandating Web interfaces for everything isn't going to work until all the vendors out there decide to make it work. Since any large vendor is interested in maintaining a proprietary advantage, rich interfaces rule the day. Rich interfaces are usually platform-specific, and even when Web-enabled, are usually tied to the most popular browser's capabilities. So you still need a Windows box.

As a manager, expect problems with the non-standard systems to land on your desk first, so build in the budget for shadow admins and techs (who will have to be qualified) to deal with the messes your organization makes. It can be done, and there are a number of good people out there, fleeing the CCN Borg effect, whom you can recruit. Just make sure you build a firewall between them and the CCN staff taking care of the enterprise boxes, or you'll get nothing but childish whining, complaining, debating, and finger-pointing from both sides. Consider it an extra cost, in funds, management time, and organizational structure, to compensate for your special computing needs.

Make sure you have good reasons to go non-standard, preferably based on customer requirements, and don't extend the effect across your organization to the point that the model no longer passes the giggle test when viewed from outside. It can be done, but it takes someone with smarts to pull it off.
 
"And no, mandating Web interfaces for everything isn't going to work until all the vendors out there decide to make it work. Since any large vendor is interested in maintaining a proprietary advantage, rich interfaces rule the day."

I'm a software engineer, and I claim that any architecture that is so limited is not ready for prime time. Proper enterprise-scale systems need to be factored in such a way that it doesn't matter what "interface" you use to talk to them: web browser, client application, automated system, etc. Any vendor who cannot provide you with standard interoperable interfaces is likely trying to hide a brittle and tightly coupled client/server architecture.

I just rolled out the Monte Carlo 2 system that handles the LDRD proposal management process. We have a core enterprise service that can be attached to with either a web server or (for administrative functions) a Java client application that runs the same on any platform. We pulled this off with very limited resources. Why can't you?
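
In outline, the pattern looks something like this toy sketch (hypothetical names, not the actual MC2 code; the real system attaches a web server and a Java client, but the JDK's built-in HTTP server stands in here to keep the sketch self-contained):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    // The core enterprise service: pure business logic, no UI knowledge.
    interface ProposalService {
        void submit(String title);
        List<String> listTitles();
    }

    class InMemoryProposalService implements ProposalService {
        private final List<String> titles = new CopyOnWriteArrayList<>();
        public void submit(String title) { titles.add(title); }
        public List<String> listTitles() { return List.copyOf(titles); }
    }

    public class MultiFrontEndDemo {
        public static void main(String[] args) throws Exception {
            ProposalService core = new InMemoryProposalService();

            // Front-end #1: a rich/administrative client calls the service directly.
            core.submit("Quantum widget study");

            // Front-end #2: a web interface attached to the very same service.
            HttpServer web = HttpServer.create(new InetSocketAddress(8080), 0);
            web.createContext("/proposals", exchange -> {
                byte[] body = String.join("\n", core.listTitles())
                                    .getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            });
            web.start();
            System.out.println("Same data now served at http://localhost:8080/proposals");
        }
    }

Neither front-end knows or cares how the other talks to the core; that factoring is what makes the "interface" question moot.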
 
As an addition to Dug's comments, the MC2 system is a Mac OS X server running Tomcat, with MySQL as the database. This system replaced the original Monte Carlo LDRD server, which consisted of two Windows 2000 servers. It's a solution that costs less, is easier to maintain and expand, and can provide additional services with ease.
 
"Its a solution that cost less, is easier to maintain, expand and provide future additional services with ease."

These are the advantages of building standards-based systems, rather than getting locked into a proprietary vendor solution for critical systems. It means I never have to tell my customers, "Just don't expect to be able to connect to the enterprise business systems..." if they're not running the same hardware and software I am. Hard to believe we're still having this debate at LANL in 2005, when every business critical system I deal with in my personal life is platform-independent.
 
I second the statement that server system implementations shouldn't require a particular platform for a client. That is what interoperability is all about. Designing anything else is foolish and results in usability problems and increased costs, both in delivery and in future systems evolution. This is well documented in computer science and industry.

Certainly vendors want lock-in, but it shouldn't be accepted by users. I've worked for more than 10 years with international standards bodies to help develop (and implement) standards that don't force a particular OS on a client (or a server). I hope the Lab isn't going down the route that requires Windoze boxes on every desktop. This is a serious error that fails to take into account that computers are there to help people get their job done, not to get in their way.

There is reason for people to use a variety of OSes for a variety of reasons unrelated to particular LANL business apps. To require everyone to have an additional computer on their desktop just to deal with business apps is expensive and cumbersome. Citrix may be considered a solution to this problem, but it is far from it.
 
Yes. This brings up the question of why the EP project configured Oracle to require Citrix to access the portal. I was told by another Oracle guru that Web access is built in.
 
It is also better to have one more tier in the architecture, so that the client doesn't care about the database in any form. This adds flexibility and the ability to deal with change in a more robust manner. Having an enterprise system depend on one database vendor is a real problem and can add enormously to the long-term expense.
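
In sketch form (hypothetical names, not any particular LANL system), that extra tier is nothing more than an interface the rest of the system codes against:

    import java.util.ArrayList;
    import java.util.List;

    // The persistence tier hides behind one interface; the business tier
    // and every client see only this contract, never JDBC, MySQL, or Oracle.
    interface ProposalStore {
        void save(String title);
        List<String> findAll();
    }

    // Stand-in implementation; a real one would issue JDBC calls. Swapping
    // MySQL for Oracle means writing one new class with these two methods.
    class InMemoryStore implements ProposalStore {
        private final List<String> rows = new ArrayList<>();
        public void save(String title) { rows.add(title); }
        public List<String> findAll() { return new ArrayList<>(rows); }
    }

    // Business tier: handed whatever store the deployment chooses.
    class ProposalManager {
        private final ProposalStore store;
        ProposalManager(ProposalStore store) { this.store = store; }
        void submit(String title) {
            if (title == null || title.isBlank())
                throw new IllegalArgumentException("title required");
            store.save(title);
        }
    }

Change the database vendor and ProposalManager never knows.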
 
The new director needs to issue a moratorium on ADA and its antics, period.
 
To 5/25 at 7:18 --->

Mousseau: she refused to fix the IM-8 labwide applications so that current Netscape and Mozilla browsers would work. It took over a year and cost Lab employees a lot of wasted time. Remember online training? One couldn't get the credit. Yeah, she's very competent.

HR IT is under IM Division. Look at how broken the HR web is. Sheena Wasfey is Kim's lap dog and couldn't care less about customer service.


Lindsey: acting CIO. Only at LANL does the CIO not report to the Director. No power, no enforcement, hence NO RESULTS.

And Mousseau and Lindsey have their staff involved with LANL's money pit called EP. Guess we have to give LLNL's NIF a run for its money on how fast we can push dollars down a dark hole.

And Mousseau and Lindsey don't publish or give talks at DOE IT conferences or anywhere else.

Yeah, that's the kind of leadership we want. These two couldn't spell IT if they tried.
 
In an ideal world, Oracle would have written its software so that it didn't have compatibility problems between its JInitiator (Java library) and non-MS browsers. But it didn't.

In an ideal world, Oracle would have written its software to run on straight DHTML and JavaScript, or to utilize a small Java applet. But it didn't.

In an ideal world, IBM/Oracle would have stated these limitations up front when doing its proposal for the Lab's ERP project. But it didn't.

So now we're stuck. Business desktop systems standards have been mandated for us by IBM and Oracle. The fact that it makes it less complex to do security and common logins adds weight to the decision.

Face it: It's all over but the crying.
 
Dug said: "Hard to believe we're still having this debate at LANL in 2005, when every business critical system I deal with in my personal life is platform-independent."

Oh, to be young again, when everything is simple and easy. Dug, you're looking at PUBLIC front-ends to major business systems. Not applications; systems. Those public front ends are specially designed and built to be W3C compliant and to function with low-end, straight password-based user access for a very restricted class of users. Even then, they're tweaked to detect which browser is using them and compensate for different "interpretations" of the W3C standards by the different browser makers. That's some expensive code you're looking at. Then go take a look at the size of the firewalls behind those systems, and the standards enforced by the IT departments on the internal users of the systems that interface with that front end.

Try this: Start developing ten years ago, when SAP was just a baby, the Web was immature and Java was just getting off the ground. That's Oracle's world. Now try to extend the same architecture to the open standards of today.

What? Can't do it? Then rewrite all the code supporting 4,500 tables and their constraints, triggers, and relationships. Rewrite all 16,000 forms for your ERP system to abandon your ancient Java library and run using straight DHTML and JavaScript. While you're at it, refactor the millions of lines of code to optimize all this.

What? You didn't write everything as totally isolated components communicating through a standard messaging architecture that didn't exist when you started? You avoided total componentization / object orientation because your customers would see a monster performance hit? Fool. Open standards are obviously the way to go!

Dave F: "Not ready for prime time" only holds weight when the definition of "prime time" is relatively stable.
 
I may not have 4,500 tables to deal with, but I do have several generations of legacy systems that cannot be thrown away; there are ways to do this that don't limit oneself to a narrow band of technologies. You say this results in expensive code, but I say it results in cheap code in the long run. Rather than just execute on an implementation, you define a standard first, and implement on top of the standard. Monte Carlo 2, for example, has multiple layers of standards (it's a four-tier system, in the industry parlance), any of which can be swapped out without affecting the other technologies. If somebody told me they wanted Oracle instead of MySQL, I could move the system in a day. And I have a staff of two.

I worked with Dave F for two years pulling information from diverse hospital information systems. You wanna see esoteric, look at the home-brew and proprietary systems hospitals run. And yet, because we defined standards, we were able to incorporate anything they could throw at us. As is often said, don't try harder, try easier.

If you're wedded to Windows, what will you do when the next technology comes along? In five years, will EP be the clunky legacy apps that EIA is now?
 
"Business desktop systems standards have been mandated for us by IBM and Oracle."

What branch of the government are IBM and Oracle?

"Face it: It's all over but the crying."

I suspect this is true, but not in the way you suggest. Check this out for a description of the 10-year debacle that Stanford's (10 minutes down the street from Oracle) Oracle implementation has been, and how they're backing away.
 
Dug, you just don't get it.

The standards that make your life so easy were not available when Oracle and SAP started developing their ERPs. There were a bunch of research papers on those topics available, but nobody in their right mind writes code to research concepts unless they're doing CS research themselves.

Even now, four-tier architectures can't pass messages fast enough (or securely enough) to handle the transaction load of an enterprise system. Response time goes into the toilet. When you carry abstraction to that level, you introduce a host of problem sources, because abstractions are leaky. Yeah, they're flexible and pretty, but they're "not ready for prime time" outside of small, specialty apps like yours.

What you're essentially saying is that your new hybrid car gets better mileage than a 10-year-old sedan, so why doesn't everyone just junk their old stuff and get with the new reality? But mileage is the only attribute in which the little hybrid outperforms the old sedan, and that is not a good enough reason for everyone to switch.

Enterprise systems (ERP, network management, etc.) are like oil tankers. They don't turn on a dime. They have incredible momentum, and complaining that they don't support modern standards isn't going to change things tomorrow. Maybe three or four years from now we'll see the shift that's probably in architectural design right now, but in four years there will be a bunch of new standards to fascinate you, such as XML-based transaction encryption and authentication.

Years from now, when you complain about some system not being optimally configured because it uses straight SSL and HTTP posts instead of the new authentication and encryption standards, I hope you get a sense of deja vu when you realize there's a couple of trillion dollars worth of code running the old stuff, and complaining that they're not with the program didn't work back then, either.
 
"What branch of the government are IBM and Oracle?"

DOE's standard is Windows. Which is why we have an audit finding. Which is why we have a platform standard effort. Which brings us back to the top of the loop!
 
We develop Linux-based applications for our customers (those that still remain, that is). The day that DOE/LANL mandates Windows boxes on our desks is the day that production/development/maintenance of our codes stops.
 
"Even now, four-tier architectures can't pass messages fast enough (or secure enough) to handle the transaction load of an enterprise system."

You're mixing categories. The number of layers in a system has no direct relationship to performance. Extra layers can improve performance, if the higher level of visibility allows the layer to perform non-local optimization. Everything on the Internet rides on top of the layered protocol stack that is TCP/IP (classically described by the seven-layer OSI model).
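
A toy sketch of that point (nothing to do with Oracle's actual internals): an added layer that sees repeated requests can cache them, which the layer below, handling one request at a time, cannot do:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // An extra layer that *improves* performance: it performs the
    // non-local optimization (caching) the lower layer can't see.
    class CachingLayer<K, V> implements Function<K, V> {
        private final Function<K, V> lower;               // the layer beneath
        private final Map<K, V> cache = new ConcurrentHashMap<>();
        CachingLayer(Function<K, V> lower) { this.lower = lower; }
        public V apply(K key) {
            return cache.computeIfAbsent(key, lower);
        }
    }

    public class LayerDemo {
        public static void main(String[] args) {
            Function<Integer, Double> slow = n -> {   // stand-in for an expensive query
                try { Thread.sleep(100); } catch (InterruptedException e) { }
                return Math.sqrt(n);
            };
            Function<Integer, Double> layered = new CachingLayer<>(slow);
            layered.apply(42);   // slow: goes to the lower layer
            layered.apply(42);   // fast: served by the extra layer
        }
    }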

Don't think that ERP means some highly streamlined codebase. Do you think Oracle hasn't had to include some 10 layers of abstraction to handle all the special cases they deal with in different organizations?

"When you carry abstraction to that level, you introduce a host of problem sources, because abstractions are leaky."

When you merge all your software into one monolithic block because you believe that only hand-tuned C and assembly language is appropriate for performance, you start to realize that scalability rests on good abstractions.

"Yeah, they're flexible and pretty, but they're "not ready for prime time" outside of small, specialty apps like yours."

And small, specialty systems like airline reservation systems, international banks, and customer service front-ends. We've been doing this stuff in the large in industry for almost 10 years now. The world has turned a few times since single-PC-on-a-desktop was the norm.

"Years from now, when you complain about some system not being optimally configured because it uses straight SSL and HTTP posts instead of the new authentication and encryption standards..."

This is my whole point. I have a system that is blocked out into related business components, so when things like this change, I'm not locked into a monolithic proprietary system that is obsolete the day I roll it out. I have visibility into all layers of the system. There are two completely different client application platforms connected to the underlying server that talk the same standard, and adding a new, esoteric one is a straightforward matter.

Most of my background is in industry, and I remember these same adventures from 5-10 years ago (government always trails industry in its travails by about the same interval). A large number of corporations have realized that "off the shelf" ERP was just a shell game. They bought a huge bundle of code designed for other organizations, spent as much or more money customizing it as they would have spent designing proper systems, and, on top of it, were forced to cajole their staffs about not being "special" in their operations. They ended up ripping out their ERP implementations and falling back to legacy systems because they just worked.
 
One last point just occurred to me. If "abstractions are leaky," what are the implications for the super-abstraction of "enterprise in a box"?
 
"DOE's standard is Windows. Which is why we have an audit finding. Which is why we have a platform standard effort. Which brings us back to the top of the loop!"

You may think that EP is the center of the solar system, but it's been a long time since corporate computers were merely tools for running enterprise software. If you don't meet your "customers" where they are, they will simply work around you. As an example, the current T&E system is very picky about whom it will allow access, so most of the students, postdocs, etc. email their T&E to their group admins, who input it into the system. I fear that EP is going to be a very distant entity to people in many parts of the lab.
 
So the recommendations are:

1) Abandon Oracle ERP

2) Do a custom rewrite of all business systems to:
--) W3C standards
--) 4-tier messaging architecture
--) No platform-specific code

Then we'll have a world-class enterprise system we can be proud of, because anyone will be able to use whatever platform they want to connect to anything.

Like that's gonna happen.
 
Dug said: "They ended up ripping out their ERP implementations and falling back to legacy systems because they just worked."

But our legacy systems don't work. They're not designed for project management integration, for example. AND, they primarily use interfaces based on the IBM 3270 character-based terminal. Most of the few GUI front ends, as implemented, are so finicky they have to run under Citrix so they have a stable environment, and any Web front ends are mostly MSIE-specific.

Abandoning EP to go back to the legacy systems is a non-starter.
 
Dug said: "And small, specialty systems like airline reservation systems, international banks, and customer service front-ends. We've been doing this stuff in the large in industry for almost 10 years now."

No you haven't. The "modern standards" you're crowing about didn't even exist ten years ago and haven't had robust, stable implementations in the languages until the last few years; some are still immature today.

Only in the last two years has business rule codification settled down enough that models like BPEL and OCL are stabilizing, with only a few brave vendors offering enterprise-class solutions for that tier. And we're still trying to work out decent standards for inter-system authentication and validation at the message level.

There have been a lot of pioneers working on this stuff for the last twenty years. I'll believe onesies-twosies implementations are in the wild, but claiming mainstream implementation is farcical.
 
Dug said: "One last point just occurred to me. If 'abstractions are leaky,' what are the implications for the super-abstraction of 'enterprise in a box'?"

Massively leaky, of course. But significantly less leaky than the baling-wire-and-bubble-gum approach most enterprises wind up with after choking off the business system funding and driving away the good developers with politics.

A few reasons ERP software is successful:
1) Many customers share the fruits of the hundreds of millions of dollars worth of development effort poured into them over the years.
2) They stick to industry best practices in their business process models.
3) They are insulated from the corporate politics of individual HR departments, CFO's, etc.

The literature for the past few years has been showing that ERP implementations are in the main successful, instead of the inverse as in the past. The key is not to customize the system to your ancient business processes / politics, but to adopt the business processes built into the system. The other key is incremental implementation, rather than Big Bang. When Nanos uttered those words, I knew it was over for EP. All over but the crying, that is. Since Nanos, they've gone back to incremental; it remains to be seen whether or not they can recover.

How I would love to find a startup firm that's committed to building a properly flexible set of business systems from scratch, and has the long-term management commitment, the discipline to stick to MBA-style best practices in the business processes, and the millions of dollars necessary to make it happen.

You think the standards are ready for prime time. I think we have a couple more years to go, but I agree that it's going to be the way of the future. Unfortunately, most of us can only do our best with what we've been handed, and we're going to be handed garbage to fix for at least another decade.
 
We've worked with industry developing many of the standards for enterprise applications and ensuring that they can work on a scale larger than a single enterprise. This includes working with all the major companies for over a decade. Multi-tiered systems are not necessarily slower, as "dug" has stated. The standards to build a decent ERP capability have been around a long time.

The problem we have seen is that industry keeps wanting to reinvent things (mostly to keep up their cash flow) and is not very interested in interoperable standards (particularly inside an application suite), because those give too much power to the customer. The customer should be demanding interoperable standards, not only externally to the clients but internally in the application suite. It appears to me that this was not done with the ERP project, which is why we see things like Oracle advertisements showing up on the client side.

We have continued to evolve enterprise applications with the standards for more than a decade with only a small incremental investment. With proper separation of components and standards, this isn't that difficult to do. We seem to turn things over to a vendor at the wrong point in development. We have people around the Lab who have actually helped develop the standards that industry has been trying to use, but we choose not to engage them and their experience within the Lab. We have worked with numerous external organizations, helping them through this process. Vendors have actually used our software to help them understand the standards and their value. It would be nice if we could leverage this effort rather than ignore it.

But I suspect the investment in the ERP effort (and its multiple reorganizations) is too great to turn back, so we need to live with it. Just don't expect too much from it. The recommendations mentioned in 12:23 have been well known for a long time (around 10 years at least) and could have been followed in the ERP project. My view of integration is to plan for integration at a higher level than initially asked for, while implementing integration locally. This forces one to think of standards in a more rigorous way and prepares one for the inevitable changes that come along. Going out and buying an IBM/Oracle product that supposedly solves the problem today does not accomplish this.
 
"But significantly less leaky than the bailing-wire-and-bubble-gum approach most enterprises wind up with after choking off the business system funding and driving away the good developers with politics."

What none of these approaches does is separate specification from implementation. One of the hardest tasks is convincing customers of the value of business process modeling (abstracted from any software implementation), but once I've done that, I can implement an enterprise system on any technology, and I can talk to any other system. I don't get unnecessary bundling or lock-in, and I can react quickly to changes, not like a loaded oil tanker. Technology changes quickly, but business practices are fairly stable, so why not factor your software so the "slip planes" are in the most optimal places?

"The key is not to customize the system to your ancient business processes / politics, but to adopt the business processes built into the system."

By definition, this doesn't meet the customer's requirements, because it forces them to take what you're giving them rather than what they asked for. In any other industry, this would be termed a failure. You shift almost the entire burden of creating the system to the institution. This almost cries out for a light-bulb joke: How many ERP programmers does it take to screw in a light bulb? One to hold the light bulb and 200 to rotate the building.

When I talk about standards, part of that is developing new standards, rather than letting the standards be an accidental result of the implementation. Monte Carlo 2, for example, is composited together out of hundreds of standalone business logic components, any of which can be yanked out, rearranged, and integrated with external systems. This isn't a product you can buy; it's a philosophy of system design, one that giant software vendors don't have a vested interest in pursuing.
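
As a toy illustration of that philosophy (the rules here are invented; this is not actual MC2 code), each component has one standard shape, so the composition, not the components, defines the process:

    import java.util.List;
    import java.util.function.UnaryOperator;

    // Each business rule is a standalone component with one standard shape,
    // so rules can be yanked out, reordered, or added without touching the rest.
    interface Rule extends UnaryOperator<Double> {}

    public class RulePipeline {
        public static void main(String[] args) {
            Rule addOverhead = b -> b * 1.35;                // invented rule
            Rule capBudget   = b -> Math.min(b, 1_000_000);  // invented rule

            // The composition, not the components, defines the process.
            List<Rule> pipeline = List.of(addOverhead, capBudget);

            double budget = 900_000;
            for (Rule r : pipeline) budget = r.apply(budget);
            System.out.println(budget);  // 1000000.0: overhead applied, then capped
        }
    }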

How do you get from LANL's existing legacy systems to next-generation systems without a "big bang?" First, you model the business processes, and then you define and publish standards that cover those business processes (how to specify and fulfill a purchase order, how to request a reimbursement check, etc). Then, you implement those standards as a "wrapper" over your existing systems and adapt all the existing client applications and external interfaces to talk the new specification. Then, you have as much time as you need to migrate the behind-the-scenes implementation without interruption to the end user.
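
A stripped-down sketch of that wrapper step (hypothetical names and record formats): clients code against the published standard from day one, while the implementation behind it migrates at leisure:

    import java.util.UUID;

    // The published standard: how any client submits a purchase order.
    interface PurchaseOrders {
        String submit(String item, int quantity);
    }

    // Step one: wrap the existing legacy system behind the standard.
    class LegacyAdapter implements PurchaseOrders {
        public String submit(String item, int quantity) {
            // translate the call into the old fixed-width transaction record
            String record = String.format("PO|%-20s|%05d", item, quantity);
            return sendToMainframe(record);
        }
        private String sendToMainframe(String record) {
            // stand-in for the real 3270/file-drop integration
            return "LEGACY-" + Math.abs(record.hashCode());
        }
    }

    // Step two, later and invisible to every client: a modern
    // implementation honoring the same standard.
    class ModernAdapter implements PurchaseOrders {
        public String submit(String item, int quantity) {
            return "NEW-" + UUID.randomUUID();
        }
    }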
 
Dug said: "How do you get from LANL's existing legacy systems to next-generation systems without a "big bang?" First, you model the business processes, and then you define and publish standards that cover those business processes (how to specify and fulfill a purchase order, how to request a reimbursement check, etc)."

This is all wonderful stuff. Now what do you do when you encounter this situation:

A Subject Matter Expert (SME) for a business sub-process has been executing this sub-process for years using a paper-based system supported by a summary spreadsheet on their desktop machine. Periodically, the SME hand-enters massaged results from the spreadsheet into the legacy system. When you go in to do the analysis, the SME is hostile and uncooperative. Any model you come up with is fraught with errors according to their review, even though you've analyzed their forms and spreadsheet to provide an exact match. No screen you design satisfies the SME, and you find one of the reasons is that you can't accommodate unauthenticated erasures and corrections to "final" forms in your security model. Your workflow engine wants something more than "this SME can do whatever they want at any point," and your business rules engine has trouble with "this rule is in effect unless it's not." In short, your modeling tools can't support whiteout.

The SME absolutely refuses to accurately describe how the summary spreadsheet results are massaged prior to entry into the legacy system, and you can't identify any pattern because there are no backups or versions, or historical copies of the spreadsheet to compare with the legacy entries. The SME has no real clue about modern implementations of the same business process, and is unwilling to learn about any other possible approaches. In the academic parlance, the SME is refusing to share their tacit knowledge or acquire new knowledge. By the way, this situation is NOT an outlier; it's more like the middle two standard deviations of LANL's business processes.

For a LANL example, go ahead and try to model the business process for answering a question about a computing security policy, or the qualifications for various Software Engineering positions (DBA, Developer, etc.) at HR. Which is a TSM? SSM? Tech? You won't be able to identify the current model, because nothing concrete is written down. It's all based on (mostly unqualified) opinion. In other words, the SMEs get to do what they want, when they want, and they seriously don't want to be roped into a defined process that someone else can understand, much less automate. That, in most SMEs' minds, makes them redundant, and thus replaceable. Fear is the killer of business process definition efforts, much less business process improvement efforts.

I've seen this in scientists, technicians, specialists and managers alike. It's hard to get over at the individual customer level, which is why it's so hard to improve or alter the business processes of an organization in response to changes in their environment, which is what is going on at LANL.

"First, you model the business processes..." Is a Hard problem, in the computational sense. What you're recommending takes 'way more than $40 million and 5 years for all the basic processes of an enterprise. Which is why the term Best Practices is so often heard in management circles today. Managers are tired of hearing the myriad reasons why a particular business subprocess can't be measured, much less controlled, shared, or altered. They can't demote or remove the resistant people, because they've been earning good performance reviews for years, but these people have so many fingers plugging so many leaks that things are coming to a standstill.

So what's a manager to do? The current path leads to failure, the people won't change voluntarily, and you can't just dump the people and start over.

About 20 years ago, someone had the bright idea of getting a bunch of genuine, acknowledged experts in a room and coming up with a standard set of business processes. After all, accounting is based on GAAP for the most part. HR policies have roots in law. Plant management is based on sound engineering principles. Manufacturing and inventory management have solid theoretical grounds. All of this is well-documented in the academic literature.

Thus, the idea of Best Practices was born. Codification of those practices into systems resulted in the birth of SAP, PeopleSoft, and other precursors to the modern ERP model. The late '80s through the mid '90s was a turbulent time, with more failures than successes until the keys mentioned previously were found. Now a manager can order a wholesale replacement of existing business processes with a new, standardized system and not face the problems of staff resistance directly, because it's not a personal thing anymore. Everyone gets to blame and grouse about the new system. This is fine as far as the manager is concerned, because the end result is unlikely to change unless the implementing management is forced out. If the staff is unwilling or unable to learn the new system, that's a good enough reason to find them a different, simpler position. It's easier to find people with skills in using a particular ERP system, and the learning curve for new hires is shorter because there's less tacit knowledge to transfer.

It took nearly 20 years, and it seems counterintuitive to the "do it the way the customer wants" model, but it works. The fact that it works has been empirically validated. Remember: "the customer" you speak of is not the staffer executing the business process, it's the manager charged with responsibility for the results of the business.

No small business in its right mind starts building a spreadsheet or hires a developer to build it an accounting system. It either outsources the whole thing or just buys a package off the shelf and sends a check for the payroll update subscription. Hand-rolled stuff was predominantly associated with large enterprises, until enough MBAs worked their way up the management ranks that they outnumbered the "good old boys" and collectively realized that most of the processes are the same, just with different terms. MBAs have conferences, too.

That's why most of the "switches" in ERP systems are in place. Is your term different? No problem. Change a label and we're done. Got an extra term you just can't live without? Add a flex field. The key to success is to stay within the 80/20 rule.
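To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical field and function names; real ERP packages do this through vendor-specific configuration tools, not code like this) of what those label and flex-field "switches" amount to:

    # Hypothetical sketch of ERP-style configuration "switches": site-specific
    # labels and flex fields layered over a fixed core schema, so the vendor's
    # code never changes -- only the configuration does.

    # Core schema shipped by the (hypothetical) vendor; identical at every site.
    CORE_FIELDS = ["cost_center", "account_code", "fiscal_period"]

    # Site configuration: relabel core fields and bolt on extra "flex" fields
    # without touching vendor code.
    SITE_CONFIG = {
        "labels": {"cost_center": "Program Code"},  # "Is your term different? Change a label."
        "flex_fields": ["charge_number"],           # "Got an extra term? Add a flex field."
    }

    def render_entry_form():
        # Build the field list a user sees: relabeled core fields, then flex fields.
        fields = [SITE_CONFIG["labels"].get(f, f) for f in CORE_FIELDS]
        return fields + SITE_CONFIG["flex_fields"]

    print(render_entry_form())
    # ['Program Code', 'account_code', 'fiscal_period', 'charge_number']

The design point is that the vendor's core never changes: every site ships the same system and differs only in configuration, which is what keeps an implementation inside the 80/20 rule.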

You want to build out an ERP system based on modern software architectural standards? Start making the rounds of the venture capitalists. Just be sure to base your case on something other than being compatible with less than 5% of the user base. Focus on agility and flexibility, and find some fatal flaw in your competition that's demonstrable in a one-slide sound bite, or you'll get laughed out of the conference room.

Remember, once the Standards meme permeates an organization, it tends to extend from business processes to other things, like desktops, security, and network management. That's what's happening at LANL. The momentum starts with empirical studies in the academic literature, is reinforced with targeted studies by Meta and Gartner and IBM, and arrives in management conferences, in management magazines, and in management training courses with a simple message: Standardization of business processes on industry Best Practices results in significant improvements in operational effectiveness and organizational efficiency. Faster, better, cheaper. And for the first time, measurable.

Incrementally lowering the efficiency of a few scientists by requiring them to use EP applications through Citrix Desktop on Demand if they refuse to have a PC in their office is not a sufficiently weighty argument to counter this juggernaut.
 
This interchange has been very interesting, but I've sort of lost track of what it has to do with the ADA trying to shove Microsoft down all of our throats.
 
"The SME absolutely refuses to accurately describe how the summary spreadsheet results are massaged prior to entry into the legacy system, and you can't identify any pattern because there are no backups or versions, or historical copies of the spreadsheet to compare with the legacy entries."

Admittedly, this is one of those areas where a software architect's job becomes half psychology, something the average computer geek is poorly equipped for. Your SME is probably not being resistant just to be stubborn. The value of this technology may be perfectly obvious to you, but it's not a proven fact to them, the customer/user. If you want their cooperation, you need to prove to them that you can solve their problems. Most of the people at LANL get whipsawed by policy and process changes, and they've (unfortunately) learned to fight any changes that come through. (Nanos has made things worse by trying to brute-force changes without winning "buy-in.")

I deal with people who are afraid of my systems eliminating their jobs, or adding to their workload in the form of retraining and relearning. So, I start by addressing the difficulties they're having with their current processes, and show them how I can help with their daily workload. I often have to do things incrementally, planning for ideal security and correctness, but accommodating the realities of the present (with lots of auditing, so that I at least know where the ad hoc activities took place). People don't want to hear a lecture from Mt. Sinai about how they should just "get with it," and they certainly don't want a pointy-headed intellectual telling them about abstract "best practices" that some outside vendor is trying to sell them. I can tell EP is already in trouble here by comments like, "This is our platform. You'll just adapt." This is the fully ripe fruit of a lack of humility early in the process. What happens when DOE passes down an odd regulation that doesn't fit what conventional MBA wisdom says is the right thing to do? Do you think they'll be pleased with what Oracle and IBM mandate?
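For what it's worth, here is a minimal sketch of the kind of interim auditing I mean, assuming a Python shop; the function and field names are made up for illustration and are not taken from any actual LANL system:

    # Sketch: wrap existing ad hoc operations with an audit trail, so interim
    # manual steps are at least recorded while the process is improved
    # incrementally.
    import functools
    import getpass
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("audit")

    def audited(func):
        # Record who ran which operation, with what arguments, and when.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            audit_log.info("%s ran %s args=%r kwargs=%r at %s",
                           getpass.getuser(), func.__name__, args, kwargs,
                           datetime.now(timezone.utc).isoformat())
            return func(*args, **kwargs)
        return wrapper

    @audited
    def post_summary_to_legacy(batch_id, adjusted_total):
        # Stand-in for the manual "massage and re-key" step tolerated for now.
        return "posted batch %s: %s" % (batch_id, adjusted_total)

    post_summary_to_legacy("2005-05", 1187.50)

Nobody's workflow changes on day one, but at least the ad hoc adjustments leave a trail you can analyze later.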

"It took nearly 20 years, and it seems counterintuitive to the "do it the way the customer wants" model, but it works."

That's only because you divorce yourself from what happens after you turn the system on. You claim you can't get cooperation from the SMEs, but then assume you're going to stick a system in front of them that they had no participation in and, by fiat, make it part of the business. Experience in countless large corporations has shown that this unfolds like a Y2K catastrophe: nobody can figure out how to use the new systems, the business suddenly grinds to a halt until the users fall back to manual systems, and all the while, the programmers are claiming success because they're doing what some consortium of academics and vendors says is right.

What would make a good ERP system, a workable one that wouldn't leave an endless trail of failed and abandoned implementations? The same thing that makes XML, ssh, ftp, http, TCP/IP, Java, Unix, etc. such enduring technologies: standards-based implementations based on accessible specifications. The electrical engineering field recognized a hundred years ago the need for standard composable parts that anyone could learn how to use, not huge black box systems controlled by the vendor. Could you imagine anyone tolerating a Ford car that only took Ford gasoline, or a Motorola cell phone that only called other Motorola phones? That we're rolling out black box ERP systems only shows how software engineering is still a discipline in its infancy. Engineering rigor will come, but it will take the continued vigilance of the entire industry to make it happen.
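To illustrate what "accessible specifications" buys you: because XML is an open standard, a few lines against any language's stock library (Python's standard library is used here purely as an example, and the document contents are invented) can read a file produced by any other conforming tool, with no vendor in the loop:

    # Any conforming parser, in any language, can read this; no vendor lock-in.
    import xml.etree.ElementTree as ET

    doc = """<purchase_order number="PO-1234">
      <line_item sku="WIDGET-7" qty="3"/>
    </purchase_order>"""

    root = ET.fromstring(doc)
    print(root.get("number"))                    # PO-1234
    for item in root.iter("line_item"):
        print(item.get("sku"), item.get("qty"))  # WIDGET-7 3

Replace the parser, the producer, or the platform, and the document still reads the same. That is the composability black box ERP systems deny you.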
 
"This interchange has been very interesting, but I've sort of lost track on what it has to do with the ADA trying to shove MicroSoft down all of our throats."

The discussion has become rather technical, and I can see how it would look irrelevant to the topic, but I think it all boils down to the difference between assembling enterprise systems out of reusable building blocks and stove-piping a Rube Goldberg machine designed for a non-matrixed for-profit into LANL's business processes. The first approach gives you just what you need: you take the pieces you want, and leave the rest. The second forces you to carry a lot of baggage from the vendor: outdated platform assumptions, bundled technologies, and awkward fixes to allow very different organizations to use the "same" systems.

Some of the EP people want you to believe that this sort of technology can only be implemented on Windows, but platform dependencies only occur when system designers fail to properly separate the platform aspects from the business aspects. There is no inherent difficulty implementing enterprise-scale systems that lack platform dependencies.
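As a sketch of what that separation looks like (hypothetical names, Python used only for brevity), the business rule lives in pure code that knows nothing about the platform, and platform details hide behind a small adapter interface:

    # Business logic stays platform-free; platform concerns sit behind a port.
    from abc import ABC, abstractmethod

    def burdened_cost(direct_cost, overhead_rate):
        # Pure business rule -- runs identically on any platform.
        return direct_cost * (1.0 + overhead_rate)

    class ReportSink(ABC):
        # Port: how results leave the system. Adapters supply platform details.
        @abstractmethod
        def write(self, text): ...

    class ConsoleSink(ReportSink):
        def write(self, text):
            print(text)

    def run_report(sink):
        sink.write("burdened cost: %.2f" % burdened_cost(1000.0, 0.47))

    run_report(ConsoleSink())  # swap in a file, web, or GUI adapter freely

Swap the console adapter for a file, web, or GUI adapter and the business rule is untouched; that is all "lacking platform dependencies" requires.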
 
Well, this is now a repetitious loop. One camp says utopia can be had, just turn over all the software development to a really smart, forward-looking, advanced development team with phenomenal business process improvement skills. The other camp says resistance is futile; the Borg has arrived.

Time will tell. The key indicator will be the presence or absence of a countering plan to the ADA standardization effort. If the science staff truly wants utopia, it better pony up the expertise and a realistic plan to make it happen, or else the Borg will win by default. No one on the IM/EP side is going to help you.

In other words: Put up or shut up.
 
"Well, this is now a repetitious loop. One camp says utopia can be had, just turn over all the software development to a really smart, forward-looking, advanced development team with phenomenal business process improvement skills. The other camp says resistance is futile; the Borg has arrived."

That's two descriptions of one camp: the monolithic/exclusive approach. The ERP folks are claiming "phenomenal business process improvement skills" and that anyone who can't keep up with them should be moved to "a different, simpler position." They're also claiming to be the approaching Borg. (I suspect we're headed for the proverbial irresistible force/immovable object showdown.)

I'm saying you have to meet people where they are. They know how to do their jobs. Work with what you've got and steer people in the direction of improvement. Organizations and people are slow to change, but software should be quick. If you have this backwards, you fail.

"The key indicator will be the presence or absence of a countering plan to the ADA standardization effort."

The answer to a bloated Five Year Plan is not another flavor of the same. When the dinosaurs fell, there were lots of nimble mammals waiting to take their place.
 