Thursday, January 3, 2008

Eras of MRP – to → LEAN

Executive Summary: (Rev. 18, Feb. 2011) This is the beginning of a threefold response to the “MRP Versus LEAN” sound-bite blogs becoming prevalent on the web. It will dwell on the historical eras leading up to packaged MRP & ERP software and its initial support of accounting-oriented make-to-stock (MTS) migrations, now headed toward make-to-order (MTO) Lean Mass-Customization w/Flow Focused-Factories. It will then include recommendations and prognostications of where Lean/Flow begins to come into play for manufacturers utilizing remaining forms of requirements planning software packages integrated with product "Configuration-to-Order" functionality.


These entries will bullet a number of time-lined events over the previous 60-some years, plus prognostications about simplified Lean/Flow real-time sub-systems that will be covered in much more detail in subsequent blogs. Since most of these recollections are personal ones from a 79-year-old semi-retired Founder Emeritus, I would like to elicit comments, re-interpretations, and corrections from the many professional friends exposed to these events as they influenced this body of work. Many thanks, Gene Thomas, Founder Emeritus

Eras of MRP – to → LEAN (w/Flow) (Historically how we got this far, and where it’s headed)

(A) Historical & personal event bullets for further blog discussions:

  1. First mechanization--punched cards or sequential tape: In the early 50’s the only ‘mechanization’ generally available to manufacturers was either punched card processing or very large scale tape-oriented computers (no disks yet!). In either case, it was a sequentially processed procedure, punching up BOM decks or maintaining tape files from punched cards. Processing involved copying (reproduction of punched card BOM decks), sorting, and summarizing level-by-level explosions. There were often excessive numbers of levels designed to delineate componentry at multiple identifiable and inventoriable levels. Then, in the mid 50’s as computing began to appear, the structuring and maintenance of single-level punched card BOM decks were often supported by the various assembly and fab drawing componentry part-numbering schemes. These were normally stored in tub files and manually pulled to support the reproduction and sorting that summarized each of the time-bucketed componentry requirements. A sequencing technique was often used to ensure that no cards were missing from the BOM and routing decks, by utilizing a collator to compare preceding and succeeding sequence numbers.

  2. Shop paper ditto-mats & ledger cards: Before the 50’s, most manufacturers were manually supporting their shop work-order paper-work with graphical duplication procedures (ditto-mats) for bills and routings, along with drawing copies. Part-numbered nomenclatures were rampant with all kinds of P/N significance schemes (such as a drawing-size prefix & revision-level suffix). Inventory balances-on-hand (BOH) were mostly posted to ledger cards, sometimes with both sold/released allocations and stock room transfers to WIP from fab completions and purchases. A locally managed Motorola plant in Rome, Italy actually had a manual posting station manned at the end of each stock room aisle, but also punched-card standard costs to four decimal positions in Lira! Manually built-up standard costs were then used to ‘charge-off’ the dollar values from WIP to finished goods to summarize cost-of-sales (COS).

  3. IBM's computer-intensive leadership: In the mid ‘50’s, manufacturing education and training classes for IBM customers at Endicott, NY were held in the training center, covering requirements planning techniques for IBM’ers and customers utilizing unit record equipment and large sequential tape-oriented computers at the IBM plants. There was always much contention from IBM customers that the computer costs were subsidized internally, and generally out of reach of normal manufacturers! I remember a couple of instructors (I wish I could remember their names) explaining that the volume of componentry generated by an explosion process was a “googolplex”--a googol to the googol power!

  4. BOM shortcuts: During the later ‘50’s, requirements planning utilized punched card reproductions of inventory BOH summarized ledger cards and exploded BOM quick-decks (pre-summarized for each model part number across all levels of a product BOM) for each time bucket of the future planning horizon, normally monthly. Early Deere plants attempted to maintain a “100-machine load” quick-deck (pre-summarized by component part number for usage at all levels for 100 of the top-level model or family) as a shortcut, but suffered a major problem with inaccurate maintenance of changes from the single-level engineering drawings. Separate cost accounting “build-up & charge-off” decks (the term “back-flush” hadn’t been coined yet!) were in even worse shape, since they were maintained by accounting personnel who were not very communicative about engineering changes.

  5. BOM explosion techniques: Then, by upgrading computers to single-level sequential magnetic tape systems storing the BOM’s, they were used for level-by-level explosions requiring a complete pass of all the BOM’s for each level and a subsequent merging of the inventory netting files by low-level-code. Several technical improvements were developed for the IBM plants to segment the inventory netting tapes by low-level-code to reduce the passing time, but it then also required newly maintained codes to be regenerated prior to any requirements planning explosion cycle. I understood that the Collins Radio Co. in Cedar Rapids, Ia. installed seven 7070 tape systems in-line (separate processing runs for each level) to try to continuously process requirements changes--the first inkling of an interest in simulating a net-change process!
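
For readers who never wrestled with those tape passes, here is a minimal sketch of the low-level-code idea in modern Python (with hypothetical part numbers; the originals were assembly-language tape programs). The code assigned to each part is the deepest level at which it ever appears, so it is netted exactly once, after every higher-level requirement has been exploded down to it:

```python
# Sketch only: hypothetical single-level BOM's (parent -> components).
bom = {
    "TRACTOR": ["ENGINE", "WHEEL", "BOLT"],   # BOLT used at level 1 here
    "ENGINE":  ["PISTON", "BOLT"],            # ...and at level 2 here
    "WHEEL":   ["BOLT"],
}

def low_level_codes(bom):
    """Assign each part the deepest level at which it appears in any BOM."""
    codes = {}
    def walk(part, level):
        if level > codes.get(part, -1):   # only ever deepen a code
            codes[part] = level
            for child in bom.get(part, []):
                walk(child, level + 1)
    # true end items are parents that appear as no one's component
    components = {c for kids in bom.values() for c in kids}
    for top in (p for p in bom if p not in components):
        walk(top, 0)
    return codes

codes = low_level_codes(bom)
# BOLT is coded level 2 (its deepest use), even though TRACTOR uses it at level 1
```

The segmented-tape improvement mentioned above corresponds to processing parts in ascending code order, which is why the codes had to be regenerated before any explosion cycle.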

  6. Bottom-up quick-deck explosions: Maytag, in Newton, Ia., pioneered the use of a 650 tape-drum system in the late '50's, where the limited assembly model schedule (provided by Joe Dorzweiler, PC Mgr) was capable of being stored on the 2000-word drum (like a revolving hard drive, which also contained the programming instructions). It projected a future time-series of several months of weekly bucketed schedules. However, the BOM’s again had to be maintained on tape in inventoried component part number sequence, in a bottom-up summarized where-used ‘quick-deck’ format, thus again requiring excruciatingly cumbersome maintenance.

  7. BOM representations: As could be observed, a plethora of computerized BOM’s were being utilized in the late ‘50’s for a myriad of silo’d organizational interests: (1) single-level drawings; (2) summarized top-down quick-decks; (3) single-level and (4) summarized ‘bottom-up’ where-used sortations; (5) indented explosions and (6) implosions; plus (7) special-use explosion costing BOM’s; with (8) single-level labor routings. The aerospace and DOD-oriented engineering/manufacturers were even more complicated, having to add a project orientation and traceability functionality to the BOM-Routing maintenance requirements.

  8. Introduction of random-access disc processing w/where-used: About this time, some of my Deere plant customers were overloading the medium-sized 650 tape-drum systems and requesting larger drums to cover more models and an MPS (Master Production Schedule) for their growing families. But with the announcement of the IBM 305 & 650 RAMAC in the late ‘50’s, I initiated the idea of a disc-oriented packaged software program, “BONG” (Bill-of-Material Generator), to maintain the where-used quick-deck tape files as a by-product of maintenance of the conventional single-level engineering drawing documentation. This package could then be generalized to cover most of the perceived engineering interests, including a simultaneously maintained where-used feature (the first manufacturing relational data base other than that used for straight sequentially part-numbered inventory).
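
For readers who never saw the chained disc files, a tiny sketch (hypothetical part numbers, Python dictionaries standing in for the original disc chaining) of the core idea: maintain only the single-level product structure, and the where-used view falls out as a free by-product:

```python
# Sketch only: single-level product-structure records plus a where-used
# index maintained as a pure by-product -- no separate deck to keep in sync.
product_structure = []   # (parent, component, qty-per) records
where_used = {}          # component -> [(parent, qty-per), ...]

def add_structure(parent, component, qty_per):
    """Post one single-level engineering change; the where-used
    cross-linkage needs no independent maintenance."""
    product_structure.append((parent, component, qty_per))
    where_used.setdefault(component, []).append((parent, qty_per))

add_structure("PUMP-A", "SEAL-9", 2)
add_structure("PUMP-B", "SEAL-9", 1)
# where_used["SEAL-9"] -> [("PUMP-A", 2), ("PUMP-B", 1)]
```

That automatic cross-linkage is exactly what made the manually tabulated where-used lists on drawings obsolete.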

  9. BOMP development basis for standardization of manufacturing applications: I was then assigned to an internal IBM plant sales oversight function (IBM sales divisions were trying to help our plants implement far-reaching applications for effective customer demonstration--“how do you use your own equipment?”), which also involved assistance to our Midwest sales force and manufacturing customers. Over the next 3 years from ’60 at the Rochester, MN IBM plant, I initiated and participated in four major manufacturing-oriented projects: (1) invented and published the IBM internal TIE (Technical Information Exchange Project #3319, September ’61 for “BOMP” Bill-of-Material Processor) specifications as a basis for a manufacturing application packaged program; (2) recruited John Schleier from John Deere as MIS Mgr. and initiated the Rochester Plant’s Control Center centralized dispatching implementation with punched card data collection and sequenced authorization of operation queues; (3) initiated the Plant’s “Dock-to-Stock” on-line receiving, warehouse locating, shortage filling, and picking systems, both subsequently documented as marketing brochures; and (4) assisted the implementation of Joe Orlicky’s design of the first net-change MRP system at the J.I. Case, Racine Claussen works.

  10. Milwaukee IBM development of BOMP: Would you believe it took me over two years (carrying around my ‘worm chart’ BONG data base specifications) to convince IBM Development management, along with several customer sites and our Milwaukee sales office staff, that we should begin the BOMP programming development managed from the field sales staff? Everyone thought that the engineering function was much too complicated to be standardized across marketplace product lines. Things finally came together with my assignment as the Mfg. Sales Mgr in Milwaukee, where we had the resources under our own command. We were then assisted with the enthusiastic customer support offered in the early ‘60’s by Allis Chalmers (Doc Rue-MIS & Bob Blair-later w/Cummins Engine), Cutler-Hammer (Oscar Reak-Pres), Milwaukee Faucets (Sandy Kohn, VP-Mfg), and Vollrath (Terry Kohler, to be Pres). JI Case-Claussen Works in Racine (Joe Orlicky-PC Mgr, Jack Chobanion-PC, & Fred Brani-MIS) was already under way with our oversight, as was Graco-Minneapolis (Gene Laguban-MIS & later at Allen-Bradley, TLA & Oshkosh Truck), who implemented their own customized RAMAC BOM maintenance system, but without the processing burden of the where-used cross-linkage functionality.

  11. Joe Orlicky's net-change influence at J.I. Case: Restricted by disc capacity constraints, Case used the 6-character RAMAC file address for its part numbering and direct file linkages. They stored a combination of a 20-day erection schedule plus a year of monthly buckets to maintain both a forecasted MPS horizon and the shorter-term configured customer order final assembly schedule (FAS). A configured 15-character generic coded model-numbering scheme (Features & Options) was used to select pre-configured modular option BOM’s during the explosion routines, logic which later showed up in the IBM PICS MRP-RPS and the subsequent MAPICS package.
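
A rough sketch of that Features & Options selection idea (the field positions, codes, and module names below are entirely invented for illustration; Case's actual 15-character layout is not documented here): each position of the generic model number picks a pre-configured modular option BOM to explode:

```python
# Sketch only: decoding a fixed-position generic model number into the
# modular option BOM's to explode. Field layout and names are hypothetical.
FIELDS = [
    (slice(0, 2), "SERIES"),
    (slice(2, 4), "ENGINE"),
    (slice(4, 6), "TRANS"),
]
MODULES = {
    ("ENGINE", "D4"): "BOM-DIESEL-4CYL",
    ("TRANS",  "PS"): "BOM-POWERSHIFT",
}

def select_modules(model_number):
    """Return the pre-configured option BOM's selected by the coded model."""
    picks = []
    for positions, feature in FIELDS:
        module = MODULES.get((feature, model_number[positions]))
        if module:
            picks.append(module)
    return picks

select_modules("03D4PSXXXXXXXXX")   # picks the diesel and power-shift modules
```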

  12. BOMP became basis for packaged manufacturing software: With the resources of the Milwaukee Sales Office and the contributing customer staffs, we programmed the “BOMP” Bill-of-Material Processor (the name fortuitously changed from “BONG”!) in assembly language, packaged for use initially on the smaller 1401 disc system, recognizing that the contributing customers would also use the package on larger equipment. Cutler-Hammer even modified the where-used chaining to utilize four balanced anchors to reduce the chain chasing for deletions within long where-used linkages on their larger 1410 hardware. Many additional implementations of the BOMP package were developed in the Milwaukee area, until we figured that we could advance the concept of packaged manufacturing software usage to cover the requirements planning functionality.

  13. Further development of manufacturing applications by Milwaukee companies: Following from the customized JI Case implementation, many additional functions were developed around BOMP from the Milwaukee IBM staff’s influence during the mid ‘60’s. Jim Burlingame, VP-Mfg at Twin Disc in Racine (and later the President of National APICS), fostered an MRP explosion program with multi-level pegging of specific sold customer requirements intermixed with the forecasted, lot-sized planned orders. He also built on top of his current accounting procedures that “charged off” the built-up “standard” costs as finished To-Order products were shipped. These transactions were too late to be used for netting WIP, so they fenced up the stocked componentry BOH’s and then hard-issued them to assembly WIP balances. Then they charged off the assembly work order BOM componentry upon shop floor completions, reducing the WIP componentry part balances to support subsequent netting during the daily MRP net-change processing. We were all chagrined about the inaccuracy of the originally manually maintained BOM’s in those days, and had to wait another decade for successful coordination of the different files from the engineering, accounting and production control silos. But in the meanwhile, this accuracy-building cultural practice was coined in Racine as “back-flushing” (subject to initially fostering “crappy” results), and began a learning curve leading up to today, whereby the accuracy issues are normally taken for granted due to the success of BOMP maintenance!
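
The back-flushing mechanics described above reduce to a very small kernel. A sketch with hypothetical part numbers and quantities (and none of the period's accuracy headaches) might look like:

```python
# Sketch only: back-flushing -- relieving WIP componentry balances from a
# reported assembly completion, rather than hard-issuing everything up front.
bom = {"CLUTCH": {"PLATE": 4, "SPRING": 8}}   # parent -> {component: qty-per}
wip_balance = {"PLATE": 100, "SPRING": 200}

def back_flush(parent, qty_completed):
    """Explode the single-level BOM for a completion and relieve WIP.
    Only as accurate as the BOM itself -- the source of the early
    "crappy" results the Racine crowd joked about."""
    for component, qty_per in bom[parent].items():
        wip_balance[component] -= qty_per * qty_completed

back_flush("CLUTCH", 10)
# wip_balance -> {"PLATE": 60, "SPRING": 120}, ready for the net-change netting
```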

  14. Development of the first packaged MRP software products: A small customer assisted when Milwaukee Faucets’ VP-Mfg (Sandy Kohn) participated by writing the netting, lot-sizing, and time-series off-setting logic in RPG for the BOMP summarized explosion routines. In those days, most data processing (Tab Mgrs.) departments reported to financial management, and product structuring was characterized as designed to maintain inventory valuations. There were excessive levels in the BOM’s to accommodate identification for counting inventory, such as cut-length part-numbering, sub-assembly drawings, semi-finished raw materials, etc. Lot sizing at these levels complicated explosion netting and time bucket off-setting. The problems were exacerbated by the then-common use of order point replenishment procedures executed independently from the BOH’s at every level. Level-by-level explosion processing gave the opportunity to gate lot sizing in synchronization with levels above and below, to save excessively generated safety time and stocking levels. We published the programs as the first packaged MRP system, called LAMP (Labor & Material Planning). Kohn’s company was still using punched card equipment to maintain an inventory BOH stock status, so we used a bank’s 1401 disc system to maintain and store the BOM’s. He then generated gross requirements for subsequent netting during a punched card stock status inventory run back at their plant.
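
The netting, lot-sizing, and off-setting logic Kohn wrote in RPG boils down to a loop like the following sketch (hypothetical buckets and lot sizes, Python rather than RPG):

```python
# Sketch only: single-level gross-to-net with fixed lot sizing and
# lead-time off-setting, the kernel of a LAMP-style GNETS pass.
def net_and_offset(gross, boh, lot_size, lead_time_buckets):
    """gross: per-bucket gross requirements. Returns planned-order releases,
    off-set earlier by the lead time (a real system would flag anything
    jammed into bucket 0 as past due)."""
    available = boh
    releases = [0] * len(gross)
    for t, requirement in enumerate(gross):
        available -= requirement
        while available < 0:              # a net requirement in this bucket
            available += lot_size         # cover it with one fixed lot
            releases[max(0, t - lead_time_buckets)] += lot_size
    return releases

# 4 monthly buckets, 25 on hand, lots of 50, 1-bucket lead time
net_and_offset([20, 40, 10, 60], 25, 50, 1)   # -> [50, 0, 100, 0]
```

In a full multi-level run, each level's planned-order releases become the gross requirements exploded to the components below it, which is where the lot-sizing synchronization issue described above bites.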

  15. Early design of MRP level-by-level, time-series, explosion logic: As can be observed from this early era of packaging manufacturing (MRP) functionality, progress steps were always contingent on computer technology advancements. An MRP explosion process (then commonly referred to as ‘GNETS’ – gross-to-net explosions, time-series) was very computing intensive, with an extremely high volume of generated transactions against a massive data base (BOM, Routings, & Inventory/Order status). From the original Milwaukee area experiences with design direction, we structured the GNETS processes to accommodate the currently developing computing power as well as projected functionality. As a result, most of the original designs were structured for ‘regenerative explosions’ with monthly time bucketing for long monthly processing runs--normally over week-ends! Orlicky’s influence at Case, Racine was a very significant step up in functionality, but required much more sophistication in the GNETS logic. They followed some of the logic prevalent in the automotive industry, where requirements were maintained to cumulative build schedules of a model-year. Orlicky retained some of the cumulative logic (especially involving behind-schedule bucketed balances), but broke the regenerative cycles down to be performed as often as weekly, or even daily as required. They also maintained both a monthly forecast horizon for about 2 years and a separate shorter-term actual final assembly schedule (FAS) supported by a configured channel order backlog. From these experiences we architected the GNETS process in such a fashion that it could be utilized for both regeneration and net-change cycles, with an eye to future real-time processing (that still has not been universally delivered).
The forecasted make-to-stock (MTS) marketplace wouldn’t soon press the need for these shorter processing cycles, but the churning of the To-Order (MTO) product lines with configuration requirements would provide most of the pressure for advancement of the state of the art. Most of those critical of MTO Lean MRP functionality still don’t realize that they should turn off the Master Scheduling (MPS) of forecasted planning with suspect planning BOM’s, and work only with the configured sold-order horizon! The GNETS logic would remain similar for real-time cycles, but the computer data base upgrading would become enormous.

  16. A J.I. Case Testimonial re: Use of BOMP as a Generic File Organizational Tool: In an IBM Application Brief publication of implementations of the late ‘60’s titled “The Production Information and Control System at J.I. Case Company—Burlington Plant” (GK20-0364-0 3/70), featuring the implementation history of the BOMP-RPS system, a byline by Orlicky’s team was included as: “The bill of material processor program represents support by IBM in the area of manufacturing application and implementation. It provides the support to organize, maintain, and reorganize the four basic manufacturing data files—part number master file, product structure file, standard routing file, and work center file. Case has found, however, that the program can be used very effectively in other application areas as well as in manufacturing. Their views are expressed in the quotation from a letter to IBM. “….this is one of the most powerful tools……for the purpose of organizing direct access files. We are presently using it not only in areas for which it was designed, such as the parts master, routing, etc., but….also….in payroll, and we will be using it in our order entry and accounting areas…..it is misnamed and….should be listed and supported as a “file organization program” rather than as a “bill of material processor….”

  17. Internal IBM World Trade training on BOMP & LAMP: In late ’64 I traveled to London, Paris, Sindelfingen, Milan, & Stockholm to present the BOMP and LAMP software to the World Trade IBM’ers covering the European marketplace. During this trip, I met with Werner Kraus and Gunter Evers, authors of the German finite scheduling/sequencing software called ‘Kraus’--later to be upgraded to CLASS and CAPOSS. As we discussed the differences in marketplace approaches, we envisioned the merging of finite scheduling and MRP into a single system, but figured it might take 10-15 years for the cultures to coincide! They related the German situation as a scheduling intensity where any production schedule changes would only be added to the end of the horizon, because all plants were overcommitted anyway. I related our US situation as that of a Sales Mgr. walking thru the shop floor with a bottle, ‘suggesting’ schedule accommodations to cover his favorite customer’s changes! It has been 50 years and neither of us is where we prognosticated! My Baltimore IBM staff later installed ‘Kraus’ (the first APS--Advanced Planning System--in German) at the Koppers Piston Ring plant in ’68, supported with data collection dispatching thru a control center. They were processing an average of 34 machining operations per shop order in a randomly pathed, functionally organized shop floor layout!

  18. Publishing of MRP-PICS application software: Subsequently, as Mfg. Industry Staff Mgr. in Chicago in the mid ‘60’s, we initiated the re-packaging of the systems as PICS-RPS (Production & Inventory Control System--Requirements Planning System, E20-0280-2), subsequently managed by Pat Reilly and Will Couch of IBM’s Development Staff, and published it as an application brochure for use internationally (the third edition copyrighted in ’68). It later became the basis for IPICS, RICS, MAPICS and COPICS.

  19. Data-base evolvement for manufacturing applications: Another Racine company with DOD traceability requirements added modifications to the BOMP logic, where Bob Haddox, MIS Mgr of Sunbeam, built support for a “chain-off-of-chains” data base traceability capability. This architecture was utilized later at Black & Decker by Bob for calculated net-change componentry use-up balancing, again supported by my staff when I was the Baltimore IBM Branch Sales Mgr. It later showed up in the architecture of the MAC-PAC lot control and traceability functionality.

  20. Influential professionals contributing to manufacturing software: Several newly formed MRP software firms and consultancies originated later during the ‘60’s utilizing the BOMP architecture and backgrounds: Dick Lilly’s Software International (SI), using the BOMP tree logic for MRP and general ledger structuring; Dick Ling’s Arista, which became XCS and Glovia, and who later originated S&OP (Sales & Operations Planning—MPS); Tom Nies’s Cincom system, utilizing a speedier file organization procedure for the item masters; Romey Everdall, Nick Edwards, Tricia Moody & Woody Chamberlain of Rath & Strong, with fully pegged requirements planning (PIOS) for aerospace traceability; Dale Colosky’s Computer Strategies (CSI), the first 2nd-tier automotive support with cumulative-repetitive, order-less processing; Gerry Roch, user-architect of M2M; and Jerry Bowman (of our Milwaukee staff) as Founder of Fourth-Shift MRP. Books, consultancies and publications were added by the highly acclaimed George Plossl & Ollie Wight in ’68, w/Chris Gray's Standard System Specs; Dave Garwood's BOM consulting in the '70's; Joe Orlicky's net-change MRP in ’75; the first bill-of-material handbook by Dick Bourke later in ’75; and Terry Schultz (from my Milwaukee staff) partnering with Dave Buker Education and subsequently as Founder of the Forum Consultancy. Subsequently, Andersen Consulting (now Accenture) purchased the package thru MRM, the Milwaukee firm I helped begin when I left IBM in ’69. It was reworked & patented under Project Mgrs Bill Darnton & Jeff Rappaport as MAC-PAC RPG, and then revised again as MAC-PAC DOD from work done by the Minneapolis Comserv group emanating from Milwaukee Allen-Bradley experience. All subsequent MRP systems used various offshoots of BOMP functionality as their base file organization—the where-used linkage was really the first randomized "relational" support for disc data base architecture.

  21. Manufacturing 'fraternity' of influential practitioners: As a result of these influential relationships among practitioners who frequently communicated professionally, during an engagement at Cummins Engine in June, 1971, I arranged a “fraternity” meeting to cover advanced MRP techniques utilizing large scale disc systems continuously processing GNETS re-planning explosions. The invited attendees included a number of those related to BOMP-MRP development during the ‘60’s, including: Jim Burlingame & Bill Wassweiler, Twin Disc; Jack Chobanian, J.I. Case; Larry Colla, Cutler-Hammer; Bob Haddox, Bob McKain & Carl Euker, Black & Decker; Dave Hargrove, Pat Murray & Jim Fortier, Baxter Labs; Herb Friedman, Thomas-Friedman Assoc’s; Ted Musial, Tom Long, Ray Fritch, Sam Huffman, Herb Pereyra, Cliff Smith, Ed Souders, of IBM; Gene Laguban, Allen Bradley; Duane Segebarth & Bud Vogel, Deere; plus key management of Cummins including Bob Blair, Gale Shirk & Earl Hahn, S&DP; Hal Smitson, Plt. Mgr, and Bob Dice, John Wertz, Leo Underwood & Doug Taylor, PC. The participants viewed Cummins' pioneering conversion from large scale tape processing systems to disc-oriented relational data base (IMS DB-2) architecture. They were able to gain a decade's advantage over their competitor, Caterpillar, who was still stuck with monstrous shadow-file functionality against legacy batch processing!

(B) As the development of MRP packaged software evolved during the ‘60’s, I made many marketplace observations from my exposures. These outlined issues will be discussed further in impending blogs:



  1. Engineering mechanization was still doing their own thing: Early engineering support was primarily resistant to any changes to commonplace assembly drawings with associated single-level parts lists as extensions of the title block. Computer assistance had not delved beyond customized programming, and there wasn’t any exposure to pre-packaged software with a successful track record. There were also numerous occasions of “tabulated” drawing formats where multiple versions of componentry were displayed in separate columns of options. This exacerbated the complexity of data base structuring, with a plethora of duplications of common components across the various columns of parents. Part and dash numbering was very inconsistent, and there were not a lot of company M&A consolidations yet that would force more engineering attention to design consistency.

  2. Engineering was still a tough sell: It was difficult to sell the engineering staff on changing for consistency until the where-used tabulations, commonly listed manually on the drawings, were proven inaccurate and a maintenance headache. Computerized automatic where-used processing was then shown to be much more accurate and cost effective.

  3. Engineering had to do most of the work—downstream functions gained the ROI: As the engineers saw it, though, they had most of the work to do to maintain the computerized BOM data, primarily for use by cost accountants and production control planners. It was always hard to get the manufacturing engineers (ME) to co-mingle with the design engineers (DE) to execute cross-referenced dual maintenance of both the BOM and Routing files. There was still a lot of “I want the file secured without anyone else being able to change what I have completed” and “we won’t know what routing operations are associated with the componentry until later in the design process”.

  4. Downstream ROI value precipitates intra-departmental Engineering cooperation: The most effective implementation I have seen is at Goulds Pumps, where the DE (Design Engr.) and ME (Mfg. Engr.) sit with their desks side-by-side, coordinating every change to the combined BOMP/Rtg database and the subsequent rules-based configurator. This has become very basic to the redevelopment of the DE & ME structuring necessary to support shop floor cellular determinations and positioning of componentry usage. It also provides assurance that the maintenance of assembly and fabrication instructions remains consistent.

  5. Mass-customization and Configuration inter-relationships evolve: Too many companies are still struggling to move from a make-to-stock strategy toward mass-customization make-to-order. In contrast, to-order systems are predicated on the concept that the majority of product structuring should not be attempted until the customer's specifications are actually ordered and/or changed. According to a published survey of large manufacturing companies, as long ago as '96, by Bruce Richardson of Advanced Manufacturing Research (AMR), "73% were then scheduling lines based on actual orders, rather than a (exploded) plan" -- or, to repeat: "didn't make it until they sold it!" Accomplishing this formidable goal requires the use of product configurator software that not only eliminates the need for 'hard' bill-of-material pre-structuring, but also drastically reduces the attendant maintenance of any remaining product pre-definitions. Then arises the remaining key issue: it's finally time to seriously invest in the potential to reduce the impact of excessive un-pegged lot-sizing and operational set-ups. This normally evolves into BOM assembly flattening, in-line consolidations, feeder routing attribute configuration, and shop paper-work presentations. For evaluations of serious interest in Lean processes, I look to movements of facilities in the fabrication and assembly feeders toward family orientations and SMED-like (Single Minute Exchange of Dies) activities. However, all the various “To-Order” strategy nomenclatures (Engineer-to, Make-to, Assemble-to, Fab-to, Machine-to, Configure-to, Schedule-to, Dispatch-to, and Field Maintenance-to, etc.) seem to miss a single common denominator--that of the need for a close degree of involvement with a high mix of product options and product configuration. Simple configuration can sometimes be accomplished by merely adding top-level hard part numbers for each variation of options.
This, however, quickly grows out of hand, dictating the need for rules-based configurator software. The requirement becomes one of producing BOM’s and routings on the fly from selections of customer requirements for catalogued “sales features & options”. As well, it then normally initiates the need to support a web orientation of these sales option selections by the sales channels for automatic processing of quoting, pricing, cross-selling, and costing.

  6. Configurator software evolving similar to BOMP/MRP specifications: Configurator software has developed over the past two decades toward support for web-based front-end sales (feature & option selectioning) to accommodate quoting, validation, pricing, and cross-selling, and, separately, subsequent back-end support for BOM, Rtg, costing and scheduling. One of the earliest developments of configurator software was done for McQuay HVAC by Thomas-Friedman Assoc. in the early '70's. We utilized BOM part-numbered tree logic to select modular componentry from planning trees to structure to-order BOM's for subsequent GNETS processing. Andersen Consulting's MAC-PAC followed on with similarly coded serialized node numbers under separated maintenance. A few other firms turned to complex Object-Oriented facilitation for processing, requiring programming staff experience with AI-like (Artificial Intelligence) structuring. Another configurator (Configuration Solutions LLC's "FLOW") has delved even deeper into MRP functionality, providing real-time maintenance for time-phased pegged component requirements/allocation, work-center loading/scheduling, cellular sequencing and back-flushing. The real-time status of its e-Commerce database is used to support on-line broadcasting of shop paper-work instructions, CAD-CAM generations, time-phased scheduling, and set-up sequencing for dispatched ‘pull’ requests from focused-factory cellular work-stations. These pegged allocations are also used for subsequent back-flushing generation with host interfaces from transactions at the appropriate cellular work-stations. This functionality can be integrated with any MRP host to represent an independent event-driven sub-system supporting Lean mass-customization.

  7. Interjection of Bottom-Up searching vs. Top-Down object treeing for Configurator architecture: In a manner of recasting innovative historical events, Fred Brani (an MIS Mgr at JI Case with Orlicky) was frustrated with the vagaries of top-down or object-oriented tree structuring of modular BOM’s utilized for configuration. (These included such vaunted techy terminology characteristics, designed to rescue the life style of Artificial Intelligence 'AI' proponents who were just discovering an applicability of tree-logic, as: rules-objects, finite-domain constraint propagation/solvers, fuzzy logic, truth-tables, class hierarchy structure inheritance, & multi-level nesting.) Instead, he invented a bottom-up ‘seek’ engine approach (somewhat analogous to current search engines) in the mid ‘80’s, after working with Square-D and with assistance from University of Illinois professors. We came together again in ’88, as ‘Logia Inc’, to market a variable-length selected-attribute-string seek technology that greatly simplified the syntax for configurator rules. It structured the feature/option coding into cataloging terminology familiar to sales, engineering, and manufacturing, which was so successful that it became the common ease-of-communication support across the various company organizational silo’s. Seek-engine performance, with its similarities to search engines, has, as everyone now acknowledges, exploded exponentially. Now the technique, known as BURBS (Bottom-Up/Rules-Based Structuring), fosters fantastic processing speed. All selected feature-options were concatenated as an unlimited variable-length string (analogous to the limited 15-character Feature/Option generic coding originated at J.I. Case 20 years earlier) which acted like a search argument against rules structured like home page meta tags for the conditioned results (parts, operations, prices, dimensions, etc).
Seek-engine performance has increased so exponentially that all applicable independent rule sets can be processed upon each click in the selection user interface (GUI). The automatically generated attribute string is maintained through every change to the configured sales-order line item, (re)generating any output format for quotes, acknowledgments, shop instructions, parametric CAD, back-flushes, and field installation upgrades. The same process also provides separately formatted web order and shop status inquiries to internal users, customers, and suppliers under user-maintained security controls. As sales orders and changes are accepted, the configurator software (re)generates and maintains, in real time, the time-phased pegged supply/demand structures for key componentry and load centers used to (re)calculate promise dates. All shop floor transactions, including 'Lean-pull' calls for focused-factory dispatching of sequenced cellular schedules, can easily emanate from web browsers, updating the attribute string to provide real-time current status (perfect truth!).
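The bottom-up seek can be sketched roughly as follows. The feature/option codes, rule tags, and resulting parts are all hypothetical, but the mechanism is the one described above: a concatenated attribute string acts as a search argument against rules tagged like meta tags, and every rule whose tags are covered by the string fires its result.

```python
# Hedged sketch of bottom-up rules-based seeking (BURBS-style): each rule
# carries a set of tags; a rule fires when all of its tags appear in the
# order's concatenated attribute string. Codes and results are invented.

RULES = [
    ({"BORE2"}, "TUBE-2IN"),
    ({"BORE3"}, "TUBE-3IN"),
    ({"BORE2", "VITON"}, "SEAL-KIT-2V"),
    ({"BORE3", "VITON"}, "SEAL-KIT-3V"),
]

def seek(attribute_string: str) -> list:
    """Match the attribute string bottom-up against every rule set."""
    selected = set(attribute_string.split("/"))
    # A rule fires when its tag set is a subset of the selected attributes.
    return [result for tags, result in RULES if tags <= selected]

print(seek("BORE2/VITON/STROKE14"))
```

Because every rule set is independent, all of them can be re-evaluated on each click of the selection GUI, which is what makes the real-time regeneration described above practical.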

  8. Finally, a common terminology across silos: Configuration functionality has tremendous potential to be reviewed alongside value-streaming activity and to join and support the effort to tie in all the siloed organizational entities. Just the process of developing product-option rule-syntax definitions, which become the common denominator of internal terminology collaborations, will bring surprising visibility across siloed protectionisms.

  9. Familized focused-factory shop and MRP functionality evolve: The term "job shop" was normally used in the early MRP era to describe a functionally oriented machining, fabrication, and assembly shop-floor layout where componentry was moved randomly to and from various common departmental work centers (often in batched lot sizes, but also in single-order piece lots). These work centers normally stacked up the lots, engendering elapsed time for move, queue, and dispatching in scheduling and loading lead-time calculations. As job-shop facilities progress toward Lean functionality, they normally assess their products and begin to look for families of products where a large proportion of componentry and operations are similar in nature. As to-order strategies (often described under the moniker of mass-customization) become more prevalent, the market for meaningful "Lean" functionality strongly suggests more "Focused-Factory" initiatives to provide specific, measurable results. Packaged software has also been significantly improved to assist the layout structuring ('fish-boning' the sub-lines) of the cellular feeders. As value-streaming catalyzes focused-factory layouts, it is then time to thoroughly review engineering restructuring. BOMs can be flattened to eliminate sub-levels that were probably the result of inconsistent DE, ME, and accounting reasoning (including Production, who wanted stocked in-process levels). Routing operations will also need to be made option-sensitive (for configured-to-order) to support the focused lines. Responsive, easy ongoing maintenance of engineering rule changes must be the practice to maintain flexibility.
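The BOM-flattening step mentioned above might be sketched like this; the part numbers and the phantom (eliminated) sub-level are invented for illustration:

```python
# Hypothetical sketch of BOM flattening: intermediate sub-assembly levels
# (marked here as phantoms) are collapsed so the parent's flattened BOM
# lists only real stocked components with extended quantities.

BOMS = {
    "ASSY": [("SUB-A", 1), ("BOLT", 4)],
    "SUB-A": [("PLATE", 1), ("BRACKET", 2)],
}
PHANTOMS = {"SUB-A"}  # sub-levels to eliminate

def flatten(parent: str, qty: int = 1) -> dict:
    """Return {component: extended quantity} with phantom levels collapsed."""
    flat = {}
    for child, child_qty in BOMS.get(parent, []):
        extended = qty * child_qty
        if child in PHANTOMS:
            # Blow through the phantom level, extending quantities downward.
            for part, q in flatten(child, extended).items():
                flat[part] = flat.get(part, 0) + q
        else:
            flat[child] = flat.get(child, 0) + extended
    return flat

print(flatten("ASSY"))
```

A flattened structure like this is also what makes the simple back-flushing discussed later dependable.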

  10. We're still trying to define where Lean boundaries are: MRP bashing, such as the past decade's tirades from the JCIT crowd blasting its workings, has a tendency to forgo the details of the elements being criticized. Even though the MRP bashers berate the whole of MRP/ERP, they often back off and confine the criticism to shop-floor activities. Residual value is usually conceded to the financials allied with ERP (GL, AR, AP, HR, SOE, Billing, COS). They then separate out the longer-range planning (MPS, S&OP, and planning BOMs) for warranted criticism. Inventory accounting, purchasing/receiving, and back-flushing are normally accepted. This primarily leaves MES (Manufacturing Execution Systems) type activities as the target for Lean replacements. It's becoming a joke, however, how various ERP systems add a kanban function, or perhaps overly precise APS finite scheduling, and then claim their legacy is 'Lean' (but not very simplified!).

  11. "Really real-time" is now becoming the requirement to advance functionality: If you look at any MRP series of modules, it is normally easy to disconnect the functions of MPS forecasting, S&OP leveling, and percentage planning-BOM explosions. What's left can normally be used as assembly scheduling for only the sold-order horizon (actual customer orders). The problem is most likely the cumbersome net-change capability that runs only in periodic batch cycles. It is difficult to find even modern MRP systems that can explode these net-changes in real time. There is an old adage we are beginning to walk into: the desire for "a single version of the truth." Until transactions can be readily and easily updated and tracked, it is difficult to see where more software can be of significant help with Lean activity. Therefore, new real-time architecture remains the limiter of software support for many Lean activities. In the meantime, visual representations of replenishment requirements (kanbans and whiteboards) are the best embodiment of "real-time" for the majority of componentry. The problem comes with representations of the remaining key componentry, i.e., pegged allocations/requirements over the sold-order horizon remaining in the FAS (final assembly/fabrication schedules).

  12. A real-time focused-factory truth sub-system: If a company is heading toward a significantly greater mix of to-order mass-customization, any MRP support of Lean shop-floor activities will have to become sensitive to re-architecting the to-order configuration process. If the shop floor's single version of the truth can become a real-time support sub-system, integrated with the other batch-oriented MRP modules, then the MRP bashers should re-concentrate on the human factors and attributes of that real-time truth sub-system.

  13. Real-time coverage of key componentry—visual replenishment for the rest: In working with to-order family product lines, I have found amazing agreement that after structuring componentry as line-stocked (visually replenished fabricated and purchased kanban-like parts), those parts end up constituting about 85% of the flattened, item-mastered part numbers. This raises the potential of eliminating much of the internal inventory transactions required to support classical MRP netting. Receiving and 4-wall back-flushing would be all that is required, even for accounting! Most PC (production control) professionals can readily determine which 15% of part numbers really need the benefit of truth controls. Therefore, if the MRP or new truth sub-systems can work with this 15%, there may be a way to utilize the normal net-change capability of classical MRP software in a more real-time truth mode.
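A minimal sketch of the 4-wall back-flushing that would replace hard-issue transactions for the 85% of line-stocked parts; the balances and BOM quantities are illustrative only:

```python
# Hedged sketch of 4-wall back-flushing: when a unit is reported complete,
# line-stock balances are relieved by the flattened-BOM quantities,
# replacing per-part hard-issue transactions. All numbers are invented.

balances = {"PLATE": 100, "BRACKET": 80, "BOLT": 500}  # line-stock on hand
BOM = {"PLATE": 1, "BRACKET": 2, "BOLT": 4}            # per-unit usage

def back_flush(qty_completed: int) -> None:
    """Relieve line-stock balances for the completed quantity."""
    for part, per_unit in BOM.items():
        balances[part] -= per_unit * qty_completed

back_flush(10)   # ten units reported complete at the end of the line
print(balances)
```

The accuracy of this relief obviously depends on the flattened BOM being correct, which is what the cycle counting discussed later verifies.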

  14. Event-driven real-time MRP truth transactions pitch back to batch MRP: All MRP systems, so far, have been built on a basically batch-processing architecture. Some are replacing a few sub-systems (normally those that become web-based) to operate in real time on shadow files, which then interface with the host legacy architecture to process batches. (An example of shadow-file processing is ATM transactions, which provide immediate available-balance updating and credit checking but don't process the ledger accounting balances until the nightly batch update.) The analogy in MRP architecture would be that any customer's option-mix change executed via the web would immediately (re)acknowledge the change's ramifications, including validation and pricing. Then costing/margining, (re)pegging of allocations against material and labor-load constraints, (re)sequencing the next 'pull' of manufacturing instructions (BOM and routing) accessed from any focused-factory cell, and changed requirements to key vendors would all be processed in real time via the web! Practically, this event-driven processing might be applied to only the 15% of key componentry and front-end cell capacities, the remainder being left to the legacy batch recycling. After all, whiteboards are normally posted as changes occur, not in subsequent recycling batches!
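The shadow-file pattern described above (the ATM analogy) might look roughly like this sketch, where only events against the assumed 15% of key parts update pegged allocations in real time and the rest are queued for the nightly batch; all names and data are invented:

```python
# Hedged sketch of event-driven shadow-file processing: key-part events
# update in-memory pegged allocations immediately; everything else is
# deferred to the legacy host's batch cycle. All identifiers are invented.

KEY_PARTS = {"TUBE-2IN"}   # the ~15% under real-time truth control
shadow_allocations = {}     # part -> pegged quantity (real-time shadow file)
batch_queue = []            # deferred transactions for the host MRP batch

def on_order_event(part: str, qty: int) -> str:
    """Route an order event to real-time pegging or the batch queue."""
    if part in KEY_PARTS:
        shadow_allocations[part] = shadow_allocations.get(part, 0) + qty
        return "processed real-time"
    batch_queue.append((part, qty))
    return "deferred to nightly batch"

print(on_order_event("TUBE-2IN", 5))   # key part: pegged immediately
print(on_order_event("BOLT", 20))      # line-stock part: batch recycling
```

The split mirrors the 85/15 division above: the shadow file carries only what genuinely needs real-time truth.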

  15. Simplify shop-floor scheduling and sequencing in focused-factories: It may be overkill to utilize classical MES software to provide the truth system for the Lean shop floor. The majority of to-order companies do not need the classical traceability archiving and compute intensity of finite-capacity scheduling for a focused family line. The front of the line (or the bottleneck station) is normally all that needs loading to capacity. It should be noted that the early APS (late-'60s Advanced Planning Systems with finite scheduling—IBM Germany's Kraus-CLASS-CAPOSS series) had its beginnings with functionally oriented factory layouts requiring completely random shop-floor paths and significant work-center queues. It makes one wonder whether that degree of complexity has seen its day as flattened BOM structuring and focused-factory Lean-pulls begin to simplify it! It is also normally overkill to post excessive operational feedback transactions (labor tickets and componentry issues) requiring sophisticated data collection, unless they are required to support the precise status needed for finite (re)scheduling or traceability. Work-order operational labor feedback posting is normally a non-value-added, cumbersome burden, often requiring IE staffing to maintain labor-rate standards, staffing that could be better utilized to reduce setups.
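Loading only the line's gating (bottleneck) station to capacity, as suggested above, can be sketched simply; the daily capacity and loaded hours below are assumed figures:

```python
# Minimal sketch of bottleneck-only loading: an order is promised into the
# earliest day whose remaining gating-station capacity covers its hours.
# No downstream stations are finite-scheduled. Figures are illustrative.

daily_capacity_hrs = 16.0
loaded = {1: 14.0, 2: 9.0, 3: 0.0}   # day -> hours already loaded

def promise_day(order_hrs: float) -> int:
    """Return the earliest day the bottleneck can absorb the order."""
    for day in sorted(loaded):
        if loaded[day] + order_hrs <= daily_capacity_hrs:
            loaded[day] += order_hrs
            return day
    raise ValueError("beyond planning horizon")

print(promise_day(3.0))
```

Downstream cells simply flow behind the gate, so no deep finite scheduling or heavy feedback posting is required.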

  16. Stick to the "sold-order horizon" configurator connection to the shop floor: If the MRP modules cannot be utilized in an event-driven, truthful net-change mode for the sold-order horizon only, then the rationale needs to be examined for reconstitution. Often the underlying problem is a configured BOM and routing issue: the configured structures are required to populate the shop-floor order system to support real-time order entry, change activity, and timely, dependable back-flushing.

  17. It's time to re-architect pegged componentry in real time: Therefore, what may be desired is an integrated sub-system, emanating from web-based quote-to-order, with real-time, time-phased scheduling/dispatch sequencing of the 15% of pegged customer-order requirements. These would subsequently be broadcast to the shop floor (like an MRP supply/demand format, but in real time as 'pulled for'). Then the remaining MRP modules and the new truth pull requirements can live in harmony for software support of a fully commissioned MRP/Lean system. Either the ERP providers or best-of-breed sub-system suppliers will need to step up to these re-architected requirements over the near term.
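A time-phased pegged-requirements structure of the kind described might be sketched as follows; the part, dates, and sales-order numbers are hypothetical:

```python
# Hypothetical sketch of time-phased pegged requirements: each demand for
# a key part carries its sales-order peg, so the supply/demand picture can
# be re-displayed in real time as orders change. All data is invented.

from collections import defaultdict

pegs = defaultdict(list)   # part number -> [(due_date, sales_order, qty)]

def peg_requirement(part: str, due_date: str, sales_order: str, qty: int) -> None:
    """Record a pegged requirement and keep the list time-phased."""
    pegs[part].append((due_date, sales_order, qty))
    pegs[part].sort()      # ISO dates sort chronologically as strings

peg_requirement("TUBE-2IN", "2008-02-10", "SO-102", 4)
peg_requirement("TUBE-2IN", "2008-02-05", "SO-101", 2)
print(pegs["TUBE-2IN"])
```

Because every requirement is pegged to its originating sales order, a customer change can be traced and (re)broadcast to the floor immediately rather than waiting for a batch regeneration.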

  18. Companies have already forged these approaches—it really works: A couple of to-order companies making dimensionally variable stroke-length pneumatic cylinders and furniture product lines gained experience during the mid-'90s with the development of these event-driven Lean sub-systems (called 'FLOW' with focused-factories), underlying their classical MRP systems. As a configurable customer quote or order arrived, the rules engine would immediately generate its specifically engineered BOMs and routings. The key 15% of its componentry was detail-pegged as time-series allocations, then simultaneously checked for material inventory availability and focused-factory critical cellular capacity. The order could then be automatically priced and costed/margined for shipment confirmation. The manufacturing instructions would be generated for broadcasting shop-floor 'cutting tickets' as requested for each cell's sequenced pull schedule. Normal use of the web configurator software by shop-floor users, while calling for dispatching pulls, was designed to provide sufficient selected shop-floor feedback transactions for internal and customer order-status inquiry. (Since the shop paper/instructions would not be available in hard-copy form until called for by cell personnel in time for sufficient set-ups, execution of the web transaction would constitute closely timed progress transactions. Of course, the dispatch queue listings were available in real time from the secured web files.) These transactions would also initiate the back-flush transactions, by pegged operation, to maintain assumptive inventory netting balances and accounting variance transactions. All of these transactions would be provided and maintained in real-time, really believable truth!
One interesting experience, however: the Production Control (PC) manager requested a modification of the real-time functionality to offer a variable time lapse prior to processing shop-floor changes (which would otherwise result in automatic re-sequencing of the dispatch queue). They wanted to run short processing batches (called 'waves,' several times per shift) when they were able to sit down and evaluate the re-scheduling (as a 'what-if')! It was so much more successful working with truthful status data!
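The 'wave' modification described above might be sketched like this: change events accumulate without disturbing the dispatch queue until the PC manager releases a wave, which applies them all and re-sequences once. All order data is invented for illustration:

```python
# Hedged sketch of 'wave' processing: shop-floor changes are recorded but
# the dispatch queue is only re-sequenced when a wave is released, letting
# the PC manager evaluate the re-scheduling as a what-if. Data is invented.

pending_changes = []
dispatch_queue = [("SO-101", 2), ("SO-102", 1), ("SO-103", 3)]  # (order, priority)

def record_change(order: str, new_priority: int) -> None:
    """Capture a change without immediately re-sequencing the queue."""
    pending_changes.append((order, new_priority))

def release_wave() -> list:
    """Apply all pending changes, then re-sequence the queue once."""
    global dispatch_queue
    updates = dict(pending_changes)
    dispatch_queue = sorted(
        ((order, updates.get(order, prio)) for order, prio in dispatch_queue),
        key=lambda item: item[1],
    )
    pending_changes.clear()
    return dispatch_queue

record_change("SO-103", 0)   # hot order; queue stays put until the wave
print(release_wave())
```

The point of the design is that the underlying status data stays truthful in real time; only the re-sequencing decision is deferred to human review.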

  19. Simplification with this sub-system provides unimagined benefits: The two final tests of this capability may well relate back to accounting controls and customer visibility of shop status, which will finally be acknowledged as the most important features. As the 85% of line-stocked componentry is visually triggered for replenishment, it is received accurately at the shop-floor point of use. When it is then back-flushed in a timely way to support randomly executed cycle counting (often when a kanban is executed), inventory stock levels may not need the support of physical balance-on-hand transactions such as hard issues. The cycle counting only needs to assure that back-flush BOM accuracy and scrapping procedures remain within tolerance. Even the job-lot replenishment manufacturing orders can be launched, dispatched, and statused through the real-time configuration system. So much for all the history of concentrated education and training to maintain fenced-up accuracy of inventory balances in support of MRP logic! It is also easy to allow secured web customer/channel inquiries into the status of their orders on the floor, providing a powerful incentive for all silos to participate in adding credibility to a single version of the truth!
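The kanban-triggered cycle counting described above reduces to a tolerance check of the physical count against the back-flushed book balance; the 2% tolerance below is an assumed policy, not from the text:

```python
# Hedged sketch of kanban-triggered cycle counting: a physical count is
# compared to the back-flushed book balance, and only out-of-tolerance
# variances demand investigation of BOM accuracy or scrap reporting.

TOLERANCE = 0.02   # 2% allowed variance (an assumed policy for illustration)

def cycle_count_ok(book_balance: int, counted: int) -> bool:
    """True when the physical count is within tolerance of the book balance."""
    if book_balance == 0:
        return counted == 0
    return abs(counted - book_balance) / book_balance <= TOLERANCE

print(cycle_count_ok(500, 495))   # within tolerance
print(cycle_count_ok(500, 470))   # out of tolerance: investigate BOM/scrap
```

A failing check points at back-flush BOM accuracy or unreported scrap rather than at missing hard-issue transactions.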

  20. Integration of the real-time sub-system to batch MRP utilizes standard back-up routines: As Lean-intentioned companies begin to assemble their focused-factories into family-oriented lines and segregate their similar component usages as the 85% to be covered under visual replenishment procedures, they are well prepared to initiate a real-time configuration/GNETS sub-system. Interfaces to the host MRP system, whether separately for legacy MTS product lines or for converted real-time Lean MTO families, are easily structured utilizing the embedded frequent-batch updates common to all legacy MRP systems. With web-based customer/channel facing for (re)configuring a quote and promising acknowledgments, the real-time sub-system can provide the ultimate support functionality of event-driven transactional control. Most of these functionalities can be implemented independently, with few prerequisites beyond web facilities, and built to develop strong employee support of the Lean journey.

  21. Prognostication of how MRP will continue to migrate toward Lean and Flow: In summary, the next decade in the MRP-to-Lean era is likely to bring: (1) retention of most legacy MRP systems; (2) evolution toward a to-order Lean mass-customization strategy; (3) execution of visual (kanban-like) replenishment for 85% of componentry; (4) implementation of real-time, web-based configured-to-order, time-phased pegged-requirements sub-systems (FLOW) for the key 15%; (5) provision of truthful real-time factory, customer, and vendor status; and (6) integration with the batch catch-up legacy host systems. It can be as simple as working with all organizational silos to accurately maintain a communication system that everyone can believe in—a fully truthful status!
(C) Since I am not experienced in the use of blog activity, I am sensitive to the use of this posting as marketing promotion, and I choose to end this initial two-fold outline without covering the specifics of the real-time configuration software I have been personally associated with from Configuration Solutions LLC (subsequently acquired by Consona MRP Inc-M2M). It will follow in a third response.


In the interim I suggest these links to my White Papers on the web home page at


Configuring Collaboration for Lean Mass-Customization Manufacturers http://www.configsc.com/pdfs/whatsNew/WhitePaperCollaboration.pdf


Product Configuration for LEAN "To-Order" Industries (A case study) http://www.configsc.com/pdfs/whatsNew/WhitePaperToOrder.pdf


Consona-M2M acquisition of Configuration Solutions LLC, namely:


http://product-configurator.consona.com/


Gene Thomas, Founder Emeritus, 847-382-0680


gthomas55@comcast.net http://mrp-to-lean.blogspot.com/



© Copyright Gene Thomas 2007 All Rights Reserved