Introduction To MandateIP® Database Architecture


Dependable information systems require a stable and responsive means of storing and retrieving data. This paper describes the MandateIP® database architecture and how that architecture provides the highest possible level of data availability while meeting the transactional demands encountered in real-time applications. The paper is divided into three main topics. The first addresses the database engine itself and the architecture for integration with the application programs. The second addresses the challenges faced by the MandateIP® database architecture and how those challenges are handled. The last describes the availability and recoverability of data in abnormal conditions (failure handling and recovery).

Database Engine And Application Process Integration

The MandateIP® database architecture is based on an “SQL” database engine. The architecture itself does not dictate the use of any particular vendor's engine, although the various vendors' engines have their own individual strengths and weaknesses. The selection of a particular engine is primarily based on the user's or client's preference. Initially, MySQL was the preferred engine, primarily due to its performance, replication features, and licensing policies. Oracle has been a close second, and with recent MySQL licensing changes, Oracle and MySQL now have equal preference.

As with the base MandateIP® architecture, the MandateIP® database architecture is a “distributed” model in which processing may be seamlessly spread across multiple computing elements. The MandateIP® database architecture identifies related datasets and then allows the assignment of an individual dataset to a database engine executing on a particular computer. Application processes closely associated with a particular dataset are assigned to computing elements with high accessibility to the computer hosting that dataset. This architecture provides natural scalability, allowing the platform to implement an extremely broad range of systems, from those with minimal performance requirements to those with massive performance requirements, on one platform architecture.

Challenges Of Dynamically Optimized Systems

Real-time or dynamically optimized systems place some challenging requirements on data availability. Real-time systems are systems that require decisions to be made within a guaranteed timeframe. Dynamic optimization makes decisions “on-the-fly,” where each decision is based on current system conditions and a set of rules for making the decision. The data needed for those decisions must therefore be available within the required limited timeframe. Since many VAS systems involve the control of mechanical equipment, many of the required timeframes may be very short (milliseconds). For example, consider an “optimized” sorting system that continuously sorts 36,000 units per hour (10 sorts per second), where each sort decision involves 4 separate events (144,000 transactions per hour) for the sorting operation alone. This system must make each individual decision within a 200 millisecond window.
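The arithmetic of this example can be checked in a few lines. This is a sketch in Python; the constants come directly from the example above.

```python
# Constants taken from the sorting example in the text.
UNITS_PER_HOUR = 36_000
EVENTS_PER_SORT = 4

sorts_per_second = UNITS_PER_HOUR / 3600                  # -> 10.0
transactions_per_hour = UNITS_PER_HOUR * EVENTS_PER_SORT  # -> 144000

# At 10 sorts per second, a new sort decision is due every 100 ms,
# so consecutive 200 ms decision windows overlap.
ms_between_sorts = 1000 / sorts_per_second                # -> 100.0

print(sorts_per_second, transactions_per_hour, ms_between_sorts)
```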

The database architecture of MandateIP® meets these challenges through a number of related techniques and procedures. The MandateIP® database architecture uses an “event driven” model in which events are communicated through messages. For events that alter a dataset, the application process handling the event is responsible for updating or modifying the dataset.

Events initiate dataset modification—dataset modification does not initiate events. By isolating datasets to individual computers, data availability is increased since multiple CPUs and disk drives share the responsibility. Additionally, restrictions on database access can easily be made, ensuring that the real-time processes have timely data availability. The MandateIP® database architecture provides tools (and reporting means) for accessing data that do not impact time-critical processes. Accessing MandateIP® datasets through other means is not recommended. Likewise, the computing platforms used in a MandateIP® system have been selected to allow the required processes to function within the required real-time timeframe. Conversely, it is possible to interfere with system operation if CPU and data resources are used outside the provided architecture.
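As an illustration of this event-driven model, the following Python sketch shows an application process applying an event message to the dataset it owns. The class and event names here are hypothetical, not part of the MandateIP API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str        # e.g. "carton_scanned" (illustrative event type)
    carton_id: str
    location: str

class SortDataset:
    """A dataset isolated to one computing element; only its owning
    application process mutates it, and only in response to events."""
    def __init__(self):
        self.locations = {}

    def handle(self, event: Event):
        # The event initiates the dataset modification -- never the reverse.
        if event.kind == "carton_scanned":
            self.locations[event.carton_id] = event.location

ds = SortDataset()
ds.handle(Event("carton_scanned", "C-1001", "conveyor-3"))
print(ds.locations["C-1001"])  # conveyor-3
```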

Data Availability And Data Recovery

Data availability, and thus the underlying data architecture, is never an issue during “normal system operation” in a properly designed system. Data availability only becomes an issue during and following some abnormal or “failure” situation. This section of the white paper discusses data from a “system failure” and “recovery” perspective. The nature of a “system failure” is extremely diverse and can range from a power failure, a computing hardware crash, or a network failure to a software bug. The MandateIP® data architecture supports multiple servers. Individual servers may have differing requirements in terms of performance, volatility, and recoverability of data. Some servers with extremely low volatility (very limited changes) may only require backup procedures to ensure data availability. Servers with high data volatility and high transactional demands are normally supported with RAID 5 hardware controllers incorporating battery-backed cache and SCSI drives.

Some systems may pose even greater challenges. In extreme cases the MandateIP® architecture supports the creation of a “real-time transaction log” for recording data changes. The transaction log provides a means to rebuild a highly volatile dataset from a given point, allowing recovery from some of the more obscure failures (e.g., a failure of a hardware RAID 5 controller could potentially destroy an entire dataset). Using a transaction log in such cases provides a completely separate mechanism (CPU, drive controller, drives, power supply…) for maintaining data.
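The rebuild-from-log idea can be sketched as follows. The log format and replay rules are illustrative assumptions, not the actual MandateIP mechanism.

```python
def rebuild(backup: dict, log: list) -> dict:
    """Re-apply every logged change, in order, on top of the last backup."""
    dataset = dict(backup)  # start from the backup point
    for op, key, value in log:
        if op == "set":
            dataset[key] = value
        elif op == "delete":
            dataset.pop(key, None)
    return dataset

# Example: two cartons at the last backup, then one arrival and one removal.
backup = {"C-1": "rack-A", "C-2": "rack-B"}
log = [("set", "C-3", "conveyor-1"), ("delete", "C-1", None)]
print(rebuild(backup, log))  # {'C-2': 'rack-B', 'C-3': 'conveyor-1'}
```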

Normally, VAS and the customer jointly agree upon the specific computing hardware for system implementation. If the customer has no preference, VAS generally uses Dell equipment. It is interesting to note that some “Computing Industry” terms have a somewhat different meaning to VAS than they do to others. In particular, a “server class” machine is normally thought of in the industry as a “high speed machine.” At VAS, the performance of a machine is normally not of utmost importance, for our architecture allows us to obtain performance through distribution of work across additional machines. Reliability is of much greater importance to us than performance. This same feature of our architecture (distribution of work across machines) is why we have no real preference for a particular database engine.

With any of the methods mentioned above, recovery of operation must be accomplished within a “time constraint.” The MandateIP® database architecture is designed to minimize this recovery period. Recovery periods for a failed RAID 5 disk drive are essentially zero, whereas recovery from other failures and with other servers may take from several minutes to nearly an hour. Rebuilding a dataset from a transaction log may take longer, depending upon how much data must be applied from the last backup point.

Conclusion

In conclusion, the MandateIP® database architecture uses third-party database engines distributed across multiple computing elements to obtain the necessary performance to achieve the desired system operation. The architecture incorporates standard industry methods to obtain a high degree of data availability and rapid error recovery.

Why MandateIP®?


As we approach the corporate or information technology representatives of our future clients, a near universal set of statements and questions arise surrounding their need for our software. Typical questions and statements are:

  • Our software works just fine, that is not our company’s problem
  • Our software already does that
  • Why introduce a completely new architecture into our current, well constructed, conventional, proven architecture—an architecture that is serving us perfectly well?
  • We have a wonderful IT staff and the software they provide our organization includes exactly the functionality we need, why do you think that we need something different?
  • We have tried that before and it didn’t help

First of all, the need for or benefit of MandateIP® in an organization is not predicated on anything currently “being broken” in the current information system. That is like saying a car with a manual transmission (stick) is broken because it does not shift automatically. The primary benefits of MandateIP® are equally applicable to organizations regardless of the state of their current software system! Just what are those “primary benefits” of MandateIP®?

Better operational productivity and higher capacity

There are numerous VAS white papers on specific means of achieving these benefits; however, for the purpose of this paper we will address only one—handling exceptions. Why would we choose a seemingly insignificant opportunity for improvement as an example to show the benefit of a claim that MandateIP® can dramatically improve both capacity and productivity? Hummm…

As we take potential clients on tours of facilities that use our software, one particular facility tour almost always lets them see what is so different about the way we think and how our software works. Early in the tour we pass some bulk pallet rack filled with license-plate-labeled cases of “reserve” product. We then pass a receiving conveyance system. At this point we describe how we handle and track the reserve product. We ask what they would expect if we were to query the information system about one of the pallet locations, one of the cases on the pallet, or the pallet itself. We then ask what they would expect to happen if we were to remove one of the cases from a pallet in the rack and throw that case onto the adjacent receiving conveyor. In “normal” systems the case or carton would be “unexpected” by the conveyance system and would probably be routed to a specific area for processing. We are not normal—no one to our knowledge has ever accused us of being normal. Upon the conveyance system scanning the carton, our software determines that the carton or case cannot possibly be in two places at one time (i.e., on the pallet in the rack and on the conveyor). We immediately update the pallet location (and pallet), removing the case. We then update the location of the carton, indicating that the carton is on the conveyor. We then determine the “best thing” to do with the unexpected carton. The decision process is “opportunistic,” looking for the best way to handle the current situation. The rules for determining the “best thing” to do with the carton are unique to each particular client; in this particular system we first see if the carton could fill an outstanding outbound order. If so, the carton is routed to shipping for labeling and shipment, and any pending pull for the shipping carton is cancelled.
If we cannot ship the unexpected carton, we examine the current active stock level to determine whether the carton could be used for replenishment. If so, the carton is routed to the replenishment area, delaying the future replenishment effort. Finally, as a last resort, we route the carton back to the reserve area for putaway.
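The decision chain just described (ship if possible, otherwise replenish, otherwise put away) can be sketched as a simple rule function. The data structures here are illustrative, not how the rules are actually encoded.

```python
def route_unexpected_carton(sku, open_orders, needs_replenishment):
    """Opportunistic disposition of an unexpected carton, in rule order."""
    if sku in open_orders:            # first choice: fill an outbound order
        return "shipping"
    if sku in needs_replenishment:    # second: cover a pending replenishment
        return "replenishment"
    return "reserve-putaway"          # last resort: back to reserve

print(route_unexpected_carton("SKU-7", {"SKU-7"}, set()))   # shipping
print(route_unexpected_carton("SKU-8", set(), {"SKU-8"}))   # replenishment
print(route_unexpected_carton("SKU-9", set(), set()))       # reserve-putaway
```

In a real system each rule would, of course, consult live order and inventory data; the point is only that the rule order encodes the client-specific preferences.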

At this point in our tour we usually get a “hummm—that is certainly different.” Hopefully, you too have had a “hummm” moment. If not, you are probably thinking, “people should never do that—if someone does, they need to be fired immediately.” We agree; people should not do that! We are in no way advocating “chaotic” management of an operation. For those who are not yet “hummm-ing,” don’t worry about the specific example we just presented; think of how such opportunistic handling of exceptions and change would work in other situations. Think of how many things get messed up by operations in the execution of the perfect plans your current software has prepared. You might also consider how changing production objectives are handled, or how unexpected priority work is incorporated into current workflow. Then think “what if” and reconsider the example.

If you are not “hummm-ing” at this point, the balance of this paper will most likely not be of any value.

What are the implications of an “opportunistic” approach to dealing with unexpected events? The dramatic benefits that our software provides are the result of continuously taking advantage of every single event that occurs within the operation. We continuously take advantage not only of the exceptions but also of every successful execution of a task. The basis of the entire concept is in “how change is handled.” When we talk of labor balancing, continuous workflow, waveless processing, dynamic optimization, and idle time reduction, all of these specific techniques are based on this single concept.

Another implication is the role of “planning” in a system. A “normal” system creates a plan “at once” and then expects the plan to be executed. To take advantage of “opportunities” that arise would require “undoing” the plan and then re-planning. We do not think that way (our moms were right). Planning is done in order to meet a set of objectives; the plan is not the objective. Planning is a means to an end. We look at an “evolving” plan where the objective is always kept in focus and every activity undertaken serves those objectives. The thought may arise as to how much less “thinking time” a computer takes to make an “at once” plan. That is not really true: the same decisions (other than the creation of rules for handling exceptions) need to be made to meet objectives for both a batch (at once) plan and an evolving plan. In the evolving plan those decisions are just spread over time.

Another implication of the “opportunistic” approach is the relationship between the computing system’s view of “what is” and the physical view of “what is.” In a “normal” system, an underlying paradigm is striving to have the physical system match the information system’s view. Our systems have a differing paradigm in which the “data is a model” of the physical and should represent the physical view as closely as possible.

Another implication of this approach is the nature of the computing platform. MandateIP® systems operate on “real-time” computing platforms. “Real-time” in this context indicates that decisions are made on-the-fly and that responsiveness to these decisions must not impact operations.

There are other implications of MandateIP®-based systems, and you will discover some of them as you contemplate the approach. We would encourage you to go back and think “what if” as you consider the benefits of our approach. The inclusion of MandateIP® does not imply an “all or nothing” approach. Our systems can integrate with your existing systems in specific operational areas, where you provide us with batch data and we then “opportunistically execute” your plan and report back on the progress.

We will make a very bold statement that these techniques for dynamically planning, optimizing, and controlling operations will become the foundation of the next generation of systems for supporting distribution, fulfillment and production systems. Hummm…

Mobile Order Fulfillment Work Stations


Mobile fulfillment workstations supported with Smart Order Fulfillment Technology (SOFT™) offer the following features to be considered as specific business needs dictate:

Basic Functionality

  • Mechanical configuration to efficiently operate in the required stock storage area (size, maneuverability, reach, accessible height)
  • Customized design for material to be handled and for pick-zone aisles.
  • Support item validation by scanning the item’s UPC or the source location.
  • Support light-directed putting (and container pushers as an option).
  • Operate as nodes of an R/F (802.11b/g) network.
  • Continuously optimize its operation, in real-time, based on pending work, job priority, and current vehicle location.
  • Support real-time exception handling.
  • Can provide the selector with two-way communication from any side of the workstation: voice, large or small monitors (standard CRT or touch-screen), lights and confirm buttons, carton pushers, keyboard and/or mouse, scanner.
  • Powered by rechargeable batteries, capable of 10 or more hours of continuous operation between charges.

HAWK Work Station Cart

Interface with Host System

  • Originally designed to operate as an independent module interfacing with a host system. In this mode, the host system performs order allocation and the fulfillment workstations receive orders as the locations from which to pick the items.
  • Also may support an operation where the fulfillment system manages the inventory within the serviced pick zone(s), performing order allocation within the zone(s). This mode allows the fulfillment system to further improve the system productivity. Under these conditions, the fulfillment workstations receive orders as the SKUs required for the orders.
  • Can report executed transactions back to the host system in real-time or as a batch process.

Dynamic List of Orders to Process

  • Does not require a fixed number of cut-off times through the day for order release from the host to the fulfillment system.
  • Addition, deletion, and modification of orders in the list of orders to process are allowed at any time during the day.
  • Selection of the best order to start next in a workstation is based, first, on order priority, and then, on current workstation location.
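That selection rule (priority first, then proximity to the workstation) might be sketched as follows; the field names and the one-dimensional location model are illustrative assumptions.

```python
def best_next_order(orders, station_pos):
    """Pick the next order: lower priority number wins; ties are broken
    by distance from the workstation's current location."""
    return min(orders, key=lambda o: (o["priority"],
                                      abs(o["location"] - station_pos)))

orders = [
    {"id": "A", "priority": 2, "location": 5},
    {"id": "B", "priority": 1, "location": 40},
    {"id": "C", "priority": 1, "location": 12},
]
# B and C share the top priority; C is closer to a station at position 10.
print(best_next_order(orders, station_pos=10)["id"])  # C
```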

Work Station User Interface

Adaptive Features

  • Display to the selector the list of all pending jobs for the orders currently in the workstation, in an optimized suggested sequence based on the current workstation location.
  • Allow the selector to modify the suggested job sequence, re-optimizing the future jobs based on the selector’s decision.
  • Allow the selector to relocate his workstation at any time, re-optimizing the future jobs based on the selector’s action.
  • Allow the selector to scan any container not in the workstation and respond by displaying all the pending transactions for the scanned container. The selector can then decide whether to process the container with his workstation, send the container to another zone, or leave it where it is.

Sharp Zaurus

Picking Path

  • Support user-defined picking paths independent of location IDs.
  • Allow easy addition and deletion of pick locations without re-labeling locations.
  • Support picking from multiple pick zones with containers passing from zone to zone.

Cartonization

  • Supports cartonization
    • As defined by the host system
    • As decided by the selector
    • As calculated by the fulfillment system

Work Station Module

Virtual Batching

  • Support endless loops as picking paths. Such loops have no beginning or end.
  • Allow the selector to add new containers to his workstation and to release containers from his workstation at any point of the picking loop.
  • Allow the selector to increase the order batch size beyond the number of cells in the workstation.

Reports

  • A variety of reports are available through the monitor in the workstation, allowing the selector to make better-informed decisions. Report capabilities include:
    • Order status.
    • Container status.
    • Selector productivity.
    • Other custom required reports.

Long Aisles and/or Few Order Line-Items

  • Support multi-step picking (i.e. pre-picking).

COFE™ Optimization Modules for Sorter-Based Processes

Download PDF

Clustering or “batching” orders to be picked together is one of the most intuitive ways to improve productivity in a distribution center. Piece sorters (tilt-tray sorters, Bombay sorters, cross-belt sorters) take this clustering to the extreme, allowing hundreds, if not thousands, of orders to be picked together.

Piece sorters are an expensive capital investment. They also often become the main component of the process, determining the pace at which the distribution center operates. Regrettably, when combined with order batching, the productivity and capacity of sorters is erratic: periods of high utilization, where the sorter operates almost at full capacity, are followed by valleys of very low efficiency. The sorter, as the main component dictating facility workflow, causes this erratic or cyclic efficiency to cascade to the other areas of the process and diminishes the overall capacity of the distribution center.

Addressing these limitations, VAS has developed SOFT™ optimization modules for sorter-based operations. These modules are based on VAS’ proprietary adaptive technology, which continuously searches, in real-time, for opportunistic ways to maximize the usage of system resources by adapting to the changing conditions of the operation. This paper is an overview of how this dynamic sorter system optimization is accomplished and the resulting benefits.

The traditional way to operate sorters is using static waves of orders. The size of these batches is normally made as large as possible to take advantage of all sorter resources. As a rule, low efficiency periods happen during wave transition periods.

The SOFT™ optimization modules eliminate, to the extent allowed by the system, the dependency on static waves, converting the process to a continuous operation. The SOFT™ sorter optimization module is “rule driven,” and the rules are uniquely defined for each application. In creating a continuous process, the sorter work is broken down into small, non-separable “mini-batches.” The mini-batches are then started individually as the “opportunity” arises, and are also completed individually. Normally, product arrival provides the events that start and complete these mini-batches. SOFT™ sorter optimization de-links the selection and delivery of product to the sorter from the sorter induction process itself. As product arrives and is identified at the sorter, the SOFT™ sorter optimization examines the current need for that product. This examination is independent, as far as the rules allow, of the prior selection of the product. The rules for sorter induction normally prioritize arriving product toward the completion of any mini-batch that is currently in process. If no in-process mini-batch requires the product, a set of rules defines the initiation of a new mini-batch and its assignment to available sorter resources. These rules are somewhat more complex and vary by application, but for the purpose of this paper suffice it to say that mini-batches are continuously starting and completing asynchronously.
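A highly simplified sketch of the induction rule described above, with illustrative data structures (the real rules are proprietary and defined per application):

```python
def induct(sku, in_process, free_chutes, pending_batches):
    """Offer an arriving item first to any in-process mini-batch that
    needs it; otherwise start a new mini-batch if resources are free.
    Returns the receiving batch id, or None to recirculate the item."""
    for batch in in_process:                 # priority: complete open batches
        if batch["needs"].get(sku, 0) > 0:
            batch["needs"][sku] -= 1
            return batch["id"]
    if free_chutes and pending_batches:      # else start a new mini-batch
        new = pending_batches.pop(0)
        in_process.append(new)
        if new["needs"].get(sku, 0) > 0:
            new["needs"][sku] -= 1
            return new["id"]
    return None                              # no current need: recirculate

in_process = [{"id": 1, "needs": {"SKU-A": 1}}]
print(induct("SKU-A", in_process, free_chutes=2, pending_batches=[]))  # 1
```

The key property, as in the text, is that batches start and complete asynchronously: each arriving item is matched to current need, independent of why it was picked.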

To the extent that waves are eliminated, so are the wave transition periods and the low-efficiency valleys. The most immediate benefit of this approach is an increase in the capacity utilization of the sorter, which allows an increase in the facility’s capacity to process orders.

VAS engineers have used adaptive technology to increase distribution center capacities for almost 20 years. Completed projects where this technology successfully increased the capacity of the facility include companies such as The Gap, HEB Grocery, and Levi Strauss. In order to achieve the desired capacity improvements, the SOFT™ optimization modules have to coordinate, in real-time, the operation of several subsystems of the distribution center. The subsystems requiring coordination can include picking, product delivery, product identification, the sorter itself, packing, and completed-order takeaway. The modules include interfaces for these systems that minimize (if not completely eliminate) the required changes to those other systems. Where the current system does not support real-time communications, the modules add that capability to the existing system.

On a project-by-project basis the SOFT™ optimization modules need to be configured for the facility’s mechanical configuration (e.g., the piece sorter's mechanical configuration) as well as for the specific business practice requirements of the customer.

Other benefits yielded by the SOFT™ optimization modules may include:

  • Reduced sorter inductor idle time
  • Reduced picker idle time
  • Increased picking productivity
  • Reduced packer idle time
  • Smoother completion of orders
  • Reduced need to stage early totes
  • Faster response to last-minute orders

The extent of these side benefits is application-dependent, as it is a function of the mechanical and process restrictions of each specific project.

Piece sorters are an excellent technology for clustering large numbers of orders to be picked together. They are also very expensive. Optimization of existing sortation equipment is the most economical means of increasing the capacity of an existing system. It can also delay the need to construct additional facilities to increase distribution network capacity. In addition, the benefits yielded from these sortation system improvements may be carried forward to future operations, reducing their effective cost.

AWMS™ Basic Module Checklist


The following checklist provides an overview of the “basic functions” available with the MandateIP AWMS. The MandateIP AWMS provides the basic, most essential features to manage the operation of a fulfillment or distribution center. An “Adaptive” WMS (AWMS) has an advantage over a traditional WMS in that it includes real-time or dynamic optimization of the principal workflow. Dynamically optimized workflow adapts to the changing conditions found in fulfillment operations.

In this checklist, the distinction between WCS and WMS is not identified. The VAS AWMS is highly integrated with the WCS, allowing this distinction to disappear. By integrating WCS and WMS functionality, optimization of the decision processes is based on all available data, making the resulting system more efficient and productive.

Each VAS AWMS system is delivered configured to meet the specific needs of the customer. There could be unusual circumstances where the required functionality is of such complexity that additional configuration is necessary; this is identified to the customer before the system is ordered. A set of “base” modules is included in either an AWMS or WCS system. These modules are identified in the checklist as “Base” modules.

External System Interfaces (1 Interface Included In Base)

  • Host, Sockets, FTP, NFS, Async
  • Equipment Control Systems, Conveyance, Sorters, ASRS, Carts, Vehicles etc.
  • EDI interfaces—Direct, Through Host
  • Worker Interfaces, RF, PTL, Handheld, Voice, Mobile, Wired

Inbound Yard (Shipment Arrival) Management

  • Realtime Trailer Arrival Information Collection and Reporting
  • Yard Management, Trailer Locator
  • Dynamic Receipt Processing Prioritization And Trailer Unloading Scheduling

Receipt Processing

  • ASN / EDI Receipt Processing
  • Q/A—inspection management and scheduling
    • User Specified % Of Shipment, Vendor, Random Selection
    • Inbound Weight Check
  • Shipment, Vendor and Carton level holds and releases for allocation

Customer Returns Processing

  • Batch Hold, Re-sellable, Other Disposition
  • Like SKU “Recursive” Sortation For Piece Sorters
  • Add To Stock, Write Off

Dynamic Stock Disposition

  • Rules For Allocation, Strict FIFO, FIFO prioritization, Emptiest location
  • Directed To Putaway
  • Directed To Fulfillment (cross dock, cases, pieces)

Inventory and Putaway

  • Directed, Random, Slotted, Replenishment
  • Multiple Locations Per SKU, Multiple SKUs Per Location
  • User Defined Locations (Creation/Deletion, Temporary & Permanent Locations)

Split Case, Full Case, Residual Management, Cartonized/Open Case Flowrack

  • Overlap Storage, Retrieval, and Cyclic Operations To Reduce Travel
  • Container Within A Container Concept, Container Identification By Member Scan

Cyclic Or Cycle Counting

  • Random
  • Scheduled
  • Opportunistic (Continuous Based On Workload) Overlap With S/R Operations
  • Re-Checks, Double Checks, Sample Size

Dynamically Optimized Fulfillment

  • Dynamic Inventory Allocation
  • Waveless Picking, Wave Picking, Overlapping Wave Picking
  • Labor Balancing, Within Zone, Between Zones, Between Areas
  • Prioritized Continuous Processing, Realtime Acceptance On New Orders
  • Optimization Of Piece Sorters, Dynamic Assignment On Item Arrival
  • Zone and Zoneless Picking, Realtime Configurable Zones, Pick Paths
  • Realtime Optimized Travel, Acceptance Of New Resources
  • Worker Interface Independent (Paper, RF, Voice, PTL)
  • Synchronized Inter-Zone Operation
  • Optimized Order Consolidation, Reduction Of Order Dwell Time

Added Value Services

  • “Work” Identifies Required Actions, Functions, Descriptions, Pictures
  • Automatic Generation Of Paper Work
  • “Work” Attached To Orders, Items
  • “Work” Assigned To Stations/Resources
  • Work Balancing Between Assigned Resources

Outbound (Shipping) Management

  • Shipment Routing, LTL, Shotgun, Priority
  • Carton, Order, Shipment Weight
  • Outbound QA Management, By User Defined % By Shipment
  • Waybill, BOL Generation
  • ASN Notification

Reporting System (Base)

  • Extensive Standard Reports, User Customizable Special “Workstation Views”
  • User Defined Custom Reports As Required, Definition Of Data, View, Printed
  • Uses Powerful SQL Reporting Language For Data Definitions
  • Printed, Screen Reports Have Separate View Definitions
  • Screen Reports Item Tagging, Linkages, Actions

Productivity And Tracking Reporting (Base)

  • Interfaces To “Office Tools”, Spreadsheets, DB etc.
  • AWMS Collects Data, Analysis Done Externally To AWMS
  • Automatic Data Aging, Data Cleanup, Data Maintained 31 Days
  • User Specified Events
  • Log of Individual Events
  • Log of Event Counts Over User Specified Periods
  • Equipment Error Event And Error Resolution Logging

Worker Authorization, Login–Logout (Base)

  • Individual Worker Authorization Levels, Restriction Of Unauthorized Action
  • Add/Delete/Modify Users and Configurable Password Aging