An information system often includes the software, the database, and the related manual processes.


Systems Analysis

Tonya Barrier, in Encyclopedia of Information Systems, 2003

VII. Summary

Systems analysis is the process by which an individual (or group) studies a system so that an information system can be analyzed, modeled, and a logical alternative chosen. Systems analysis projects are initiated for three reasons: problems, opportunities, and directives. The people involved include systems analysts, sponsors, and users. The process by which systems are developed can be described by the systems development life cycle. The tasks, techniques, and tools used within the systems development life cycle can be referred to as a methodology. There are three classifications of methodologies: traditional, information engineering, and object-oriented. CASE tools are automated tools that support specific methodologies.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B0122272404001775

Systems Analysis and Model Synthesis

Donald W. Boyd, in Systems Analysis and Modeling, 2001

1.4.3 Mtm Approach

Combination 4, macro-to-micro (Mtm), stands in contrast to combination 1 (mtM) as a clear departure from tradition. For example, Mtm methodology reverses the order of the mtM FEM in producing water planning models [11]. Furthermore, the U.S. Air Force Program for Integrated Computer Aided Manufacturing utilizes a Mtm hierarchy of diagram names and node numbers called a node tree [46].

Mtm analysis starts from above with macro forms that incorporate system components. Macro-level data are a record of natural integration to this level by the physical system as it functions over time. Breakdown of the physical system from macro to micro is thus analogous to the mathematical process of differentiation, symbolized by the downward arrow of Figure 1.11. Systems domain knowledge includes expertise from both the mathematical and statistical domains.

Systems analysis conducted at any homogeneous level of detail enables synthesis of a linear systems model for that level. Thus, systems analysis and model synthesis are concurrent activities that iterate toward the micro until differentiation has produced adequate detail. Concurrent development is portrayed by the triangles of Figure 1.12. Level of detail is a matter of choice and subject to definition. L1 defines a first order of systems differentiation for analysis and synthesis. L2 defines a second order of differentiation and, thus, distinguishes additional detail. Iteration continues until the desired level of detail is achieved at Ln. Consequently, combination 4 gives rise to the premise of this book, stated as follows:

Figure 1.12. Concurrent Development

The macro-to-micro approach provides a practical, effective, and natural framework for systems analysis and model synthesis.

Model synthesis is based on systems domain knowledge that may only incidentally include a data base. Although the message of this book could be expressed with mathematical rigor, theorems and proofs are left to others. Knowledge-based systems modeling is presented through concepts, principles, and examples.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780121218515500010

Systems, analysis, and synthesis

Barry Dwyer, in Systems Analysis and Synthesis, 2016

Abstract

Systems, Analysis, and Synthesis describes the scope of the book and explains the difference between analysis and synthesis. It treats systems analysis as a process in which the analyst is challenged to reverse-engineer someone else’s, possibly dysfunctional, synthesis. It is more important to ask ‘What?’ and ‘Why?’ than ‘How?’

Rather than design a system by guesswork, the reader is encouraged to solve problems at an abstract or conceptual level and derive a design that is guaranteed to be correct. Although an experienced designer can often find a valid design intuitively, experience may be the one thing the reader lacks.

The chapter contrasts compositional and decompositional approaches, suggesting that composing a system from parts can be simpler than decomposing a completed system.

It introduces the question of tractability, a thread that will run through the book. The book avoids, as far as possible, suggesting that the reader should solve intractable problems.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128053041000102

Systems Modeling Principles

Donald W. Boyd, in Systems Analysis and Modeling, 2001

2.1 KNOWLEDGE-BASED MODELING

Systems analysis results in the organization of a knowledge base that contains both hard and soft knowledge items. Hard knowledge consists of laws, theory, principles, system relationships, and, for some systems, primary data. Soft knowledge comprises heuristics: the informal, experiential knowledge of one or more experts that constitutes rules of good judgment in the system domain. Each Mtm model is supported by a knowledge base that typically includes items such as these (a small illustrative sketch follows the list):

Physical system identification and level of circumscription

Fundamental units of analysis:

Space

Mass (or mass attribute)

Time

Number of subsystems

System schematic

System variables

Primary data

Table displaying structure of the model

Dynamic forms

Specifications

Heuristics
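As a purely illustrative sketch, not taken from the chapter, the items above could be organized in a small Python structure; every field name below is a hypothetical stand-in for an item in the list.

from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """Hypothetical container for the Mtm knowledge-base items listed above."""
    system_id: str                                        # physical system identification
    circumscription_level: str                            # level of circumscription
    units: dict                                           # fundamental units: space, mass, time
    n_subsystems: int                                     # number of subsystems
    schematic: str                                        # reference to the system schematic
    variables: list = field(default_factory=list)         # system variables
    primary_data: dict = field(default_factory=dict)      # primary data
    structure_table: list = field(default_factory=list)   # table displaying model structure
    dynamic_forms: list = field(default_factory=list)
    specifications: list = field(default_factory=list)
    heuristics: list = field(default_factory=list)        # soft (experiential) knowledge

# Example: a minimal, made-up reservoir-system knowledge base
kb = KnowledgeBase(
    system_id="reservoir", circumscription_level="basin",
    units={"space": "acre-ft", "mass": "water volume", "time": "month"},
    n_subsystems=1, schematic="fig-2.1",
    variables=["inflow", "storage", "release"],
    heuristics=["releases rarely exceed 80% of storage in any month"],
)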

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780121218515500022

Inventory Systems Models: Shaping Dynamic Response

Donald W. Boyd, in Systems Analysis and Modeling, 2001

6.3.1 Ideal Profile

Systems analysis and modeling provide the means for establishing an ideal system or happy medium as a standard of performance. However, in practice, actual system performance often fails to conform to the ideal. Constraints restrict performance, resulting in a displaced ideal. Consider an ideal profile: The ideal system maintains a steady-state inventory level that just matches a given steady-state demand rate. Furthermore, the system must also be able to achieve a steady-state response that yields supply rate equal to steady-state inventory level.

To model such a system, automatic, transient response is required to counteract any positive or negative perturbation in inventory level from its steady-state value. Targeted steady-state values can be achieved via an L 1,1 model that contains two negative-feedback, goal-seeking loops. One loop makes accretion responsive to any perturbation in inventory level from steady state. The other loop makes depletion responsive to inventory level.
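A rough numerical sketch of this idea follows (it is not the book's L1,1 formulation): a Python loop in which accretion and depletion are each adjusted by a negative-feedback term toward a steady-state target. The target level, demand rate, loop gains, and time step are arbitrary assumptions chosen only to show the goal-seeking behavior.

# Minimal sketch: inventory with two negative-feedback, goal-seeking loops.
# All numeric values below are illustrative assumptions, not from the book.
target_level = 100.0              # steady-state inventory level
demand_rate = 10.0                # given steady-state demand rate
k_accrete, k_deplete = 0.4, 0.2   # hypothetical loop gains
dt, steps = 1.0, 30

inventory = 120.0                 # start with a positive perturbation
for t in range(steps):
    error = target_level - inventory
    accretion = demand_rate + k_accrete * error   # loop 1: responds to the perturbation
    depletion = demand_rate - k_deplete * error   # loop 2: responds to inventory level
    inventory += (accretion - depletion) * dt
    print(f"t={t:2d}  inventory={inventory:7.2f}")
# The perturbation decays geometrically back toward the 100.0 target.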

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B978012121851550006X

Power Transmission, High-Voltage

S.A. Annestrand, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

II.A System Analysis

System analysis is performed by utilities to plan and develop electric power networks. Analysis of the steady-state and transient performance of a simulated power grid and of planned additions to the grid helps determine changes needed in equipment, controls, and configuration for short-term and long-term increases in system capacity. New equipment can be simulated by means of system analysis methods, reducing uncertainty about its performance in the particular application for which its installation is planned.

A theoretical analysis of power flow in an ac electrical system can be illustrated by means of phasor diagrams. Assume, for instance, that power is flowing from a generator over a short line to a load (Fig. 1). The relationships between line current (I), voltage at the sending end (US), and voltage at the receiving end (UR) are shown in Figs. 2 and 3.

FIGURE 1. Simplified representation of a power transmission line, where US is the sending end voltage, UR the receiving end voltage, I the line current, RL the line series resistance, XL the line series reactance, j the square root of −1, ZL the line series impedance, and XC the line shunt capacitive reactance.

FIGURE 2. Transmission line voltages under normal load conditions (lagging power factor), where IP is the active line current, IQ the reactive line current, d the load angle, w the current phase angle, US the sending end voltage, UR the receiving end voltage, RL the line series resistance, j the square root of −1, and XL the line series reactance.

FIGURE 3. Transmission line voltages under light load conditions (leading power factor), where IP is the active line current, IQ the reactive line current, d the load angle, w the phase angle, US the sending end voltage, UR the receiving end voltage, RL the line series resistance, j the square root of −1, and XL the line series reactance.

Line capacitive current to ground is taken into account by introducing the capacitive reactance 2Xc at each end of the line. Figure 2 represents a condition with a typical inductive load at the end of the line, whereas Fig. 3 depicts a low-load condition. The cosine of the phase angle ϕ between the voltage and current at the load is called the power factor. An inductive circuit (Fig. 2) is said to have a lagging power factor. A capacitive circuit (Fig. 3) has a leading power factor. In general, it is desirable to operate with a high power factor. This is achieved by using shunt capacitors to boost the voltage when the current lags the voltage, and shunt reactors to reduce the voltage when the current leads the voltage. In Fig. 3, for example, the line current becomes capacitive, and the voltage at the receiving end is higher than that at the sending end. To reduce this effect, called the Ferranti rise, shunt reactors are installed at the end of the line.
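As a small illustration of the power-factor terminology used above (not taken from the article), the following sketch computes the power factor from assumed active and reactive power values and labels the condition lagging or leading.

import math

def power_factor(p_active_mw, q_reactive_mvar):
    """Return (power factor, 'lagging'/'leading') for assumed P and Q values."""
    s_apparent = math.hypot(p_active_mw, q_reactive_mvar)   # apparent power, MVA
    pf = p_active_mw / s_apparent
    # Convention assumed here: inductive load -> Q > 0 -> current lags the voltage.
    kind = "lagging" if q_reactive_mvar > 0 else "leading"
    return pf, kind

print(power_factor(400.0, 150.0))    # typical inductive load (the Fig. 2 situation)
print(power_factor(400.0, -90.0))    # lightly loaded, capacitive line (the Fig. 3 situation)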

The amount of power that can be transmitted over a line is determined by the range of voltages available at the sending end, the range of voltages acceptable at the receiving end, and the characteristics of the electrical circuit. Certain restrictions, such as thermal limits on equipment comprising the circuit, may place practical limits on the power transmission.

On an ac line, power transmitted Pac is approximately given by

(1) Pac = (UR US / XL) sin δ,

where US is the sending end voltage, UR the receiving end voltage, XL the line reactance, and δ the difference between the phase angles of these voltages, or the load angle. Power will flow over an ac line even if voltages at the sending and receiving ends have the same magnitude provided that the phase angles of the voltages at each end are different. This is not the case on a dc line; dc power flow Pdc depends on voltage differences at the sending and receiving ends, as shown by

(2) Pdc = UR (US − UR) / RL.

In Eq. (2), UR and US represent dc voltages at the receiving and sending ends of the line, respectively, and RL is the dc line resistance.

Maximum power transfer over a lossless ac line occurs under stable conditions for a load angle of δ = 90°, but transient stability criteria and other considerations limit the value of the load angle to about 30° to 40° in practical cases.
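A brief numerical sketch of Eqs. (1) and (2) follows; the voltage, reactance, and resistance figures are assumed for illustration and do not come from the article.

import math

def p_ac(us_kv, ur_kv, xl_ohm, delta_deg):
    """Eq. (1): approximate AC power transfer in MW (voltages in kV, XL in ohms)."""
    return us_kv * ur_kv / xl_ohm * math.sin(math.radians(delta_deg))

def p_dc(us_kv, ur_kv, rl_ohm):
    """Eq. (2): DC power transfer in MW, driven by the end-to-end voltage difference."""
    return ur_kv * (us_kv - ur_kv) / rl_ohm

# Hypothetical 500-kV line with XL = 100 ohms
for delta in (30, 40, 90):
    print(f"delta={delta:3d} deg  Pac={p_ac(500, 500, 100, delta):7.1f} MW")
# AC power flows even with equal voltage magnitudes, because the phase angles differ.

# Hypothetical DC line: power flow requires a voltage difference between the ends.
print(f"Pdc = {p_dc(505, 500, 5):.1f} MW")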

On a heavily loaded line, voltage drop across line series impedance can cause voltage at the receiving end to fall below acceptable levels. Series capacitors are sometimes used in long transmission lines to reduce the line series impedance. This reduces the required δ, or load angle, for a particular power transfer level, as indicated by Eq. (1). A shunt capacitor at the end of the line would also help increase output voltage.

Surge impedance usually refers to a characteristic line impedance with losses neglected. Thus, the surge impedance ZS of a line is pure resistive, as defined by

(3) ZS = √(XL XC),

where XL and XC represent the series inductive and shunt capacitive reactances of the line.

Surge impedance loading PS of the line is defined by

(4) PS = UR²/ZS.

The ac power transmission performance, both technically and economically, is affected substantially by reactive as well as active power flow. In general, the most satisfactory operational performance occurs at or near the "surge impedance loading," or natural load level. At this level, the reactive power lost in the line series reactance due to current flow is balanced by the reactive power generated by the line shunt capacitance, and there is no reactive power flow at either terminal. Except for resistance effects, which are minor in EHV ac lines, the longitudinal voltage profile is in this case essentially flat.

The natural surge impedance and corresponding load level can be adjusted up or down by shunt or series (or both) capacitive and inductive reactances. In general, shunt reactance in a given circuit controls the voltage level and series reactance controls the load angle, both of which are critical to the load flow and stability. A long transmission circuit, to achieve best performance and economy, may require both series and shunt capacitive compensation at full load and shunt inductive compensation at light or no load. For example, a 50% series and 100% shunt capacitive compensated line has an adjusted surge impedance load level that is twice its uncompensated value and the load angle remains the same. At no load, however, 100% shunt inductive compensation is required to hold the voltage profile flat. In practice, 100% shunt inductive compensation is rare. Most operators insert between 50 and 75% shunt reactance and allow voltages to rise slightly with light loads.
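The sketch below applies Eqs. (3) and (4) and then rescales the surge impedance load for series and shunt capacitive compensation. The line constants are assumed, and the compensation adjustment is the standard relation Z' = √(XL(1 − kse) · XC/(1 + ksh)); it is included here because it reproduces the "twice the uncompensated value" figure quoted above.

import math

def surge_impedance(xl_ohm, xc_ohm):
    """Eq. (3): ZS = sqrt(XL * XC), losses neglected (purely resistive)."""
    return math.sqrt(xl_ohm * xc_ohm)

def sil_mw(ur_kv, zs_ohm):
    """Eq. (4): surge impedance loading PS = UR^2 / ZS (kV and ohms give MW)."""
    return ur_kv ** 2 / zs_ohm

def sil_compensated(sil, k_series, k_shunt_cap):
    """Adjusted SIL for fractional series and shunt capacitive compensation (assumed relation)."""
    return sil * math.sqrt((1.0 + k_shunt_cap) / (1.0 - k_series))

# Hypothetical 500-kV line: XL = 130 ohms, XC = 520 ohms
zs = surge_impedance(130.0, 520.0)                 # about 260 ohms
base = sil_mw(500.0, zs)                           # roughly 960 MW natural load
print(base, sil_compensated(base, 0.50, 1.00))     # second value is twice the first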

System stability must be considered in an analysis of the power transfer capability of a transmission line, and stability factors become more and more important as power systems become more complex. Definite limits apply to the amount of power a generator can deliver or a load can draw through a line impedance. System instability results from attempts to increase generator inputs or load outputs beyond the system stability limits. Demands beyond a system's stability limits can be exceeded either gradually or suddenly, making two analyses—system steady-state stability and system transient stability—necessary for a complete system analysis.

The approximations and calculations needed for system analysis become considerably more complex as additional sources of generation and additional loads are added, and the system is integrated into a large power network. Several tools have been developed to help system planners more efficiently perform system analyses.

Analog computers are the classic equipment for analyzing electrical systems. Many are essentially scaled-down physical models of the system, complete with generators, substations, power lines and other equipment. Transient network analyzers (TNAs) are special-purpose analog models especially suitable for studies of transient performance during switching operations and fault conditions. They are composed of model elements that simulate electrical equipment and have electronic switches that simulate power circuit breakers. Lines are modeled as lumped-constant ladder networks. Modern TNAs are equipped with automatic data logging or recording systems, permitting several hundred switching operations to be made quickly, with results automatically tabulated, analyzed, and plotted. Sometimes TNAs provide limited accuracy as they represent transmission lines as discrete π sections, limiting voltage and current rise times in some situations. Direct current simulators, also special-purpose analog models, are used to study interactions between dc and ac systems and the behavior of dc systems under various operating conditions. Composed of model circuits simulating lines, substation equipment, and converter equipment and using electronic circuits, microprocessors, and computers for control, dc simulators can test full-scale control equipment for dc converter stations.

Analog computers have several disadvantages. Building the model, setting up the connections, making adjustments, and reading data on meters and oscillographs are tedious and time consuming. For those reasons, digital computers have become nearly universal tools for modeling and analyzing electric power systems. Unlike analog models, digital models are composed of subroutines that calculate numerical solutions to mathematical formulations of electrical circuit behavior at different points in the power system. Different voltage and current levels can be inserted into the model formulas, allowing analysis of transmission line behavior under a number of operating conditions quickly and easily. They permit even small utilities to perform system investigations relatively inexpensively, constrained only by the cost and, if the computer's central processing function is shared, the availability of processing time. Depending on the purpose, analytical studies of power systems can be divided into three categories.

Power flow studies simulate the steady-state performance of an existing or planned system under different operating conditions in order to identify potential operating overloads or voltage problem areas. Reactive compensation, additional equipment, and transmission lines needed can be calculated, and analyses of different equipment and line options in meeting projected demands can be performed. A typical power flow program simulates more than 2000 buses, 3000 lines, and 500 transformers. The engineering data obtained from power flow studies include the calculated magnitude and phase angle of the voltages at each bus, the magnitude of electrical currents in each line in the system, and the real and reactive power flow in each line. Modern power flow programs provide data of value to both engineers and economists. Used for financial planning, power flow studies can help determine financial requirements for additions to the power system. For engineering purposes, power flow studies flag locations of potential line and equipment overloads and other out-of-range operations by preparing exception reports, reducing the time needed for engineering and designing additions to the power grid.

Transient overvoltage studies are performed to determine the magnitude of anticipated transient overvoltages from breaker and switching operations, fault conditions, and lightning strikes, and their effects on system operations and system integrity. Transient overvoltage levels largely determine transmission line and substation insulation requirements and insulation coordination strategies. Transient overvoltages caused by switching are normally determined statistically, and overvoltage data are often presented as distribution curves.

System stability studies are performed to test system stability limits in the steady-state operating mode and under various transient conditions. Steady-state stability studies are less rigorous than transient stability studies, primarily involving an investigation of slow incremental variations around the system's equilibrium point. Transient stability studies are more involved. They determine whether a system will remain synchronized and stable following such major disturbances as system faults, sudden load changes, loss of major transmissions, or loss of generation.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B0122274105006086

Data Flow Diagrams

Sagjin Yoo, in Encyclopedia of Information Systems, 2003

I. Introduction

Since at least 1960, systems analysts have faced the task of describing business processes in order to build performance-enhanced information systems. This is true whether the analyst is designing a system to automate a structured task (such as order entry) or to support unstructured objectives (such as business strategy formation).

However, systems analysts at that time did not have proper modeling tools for designing a business process function. Unsuccessful systems are often exposed as dramatic and expensive failures, and they usually bear the organizational blame. For these reasons, systems analysts have put considerable effort into developing ways of understanding processes that can translate directly into information system specifications. These specifications, which often demand detailed descriptions of database structures, records, and fields, are explicit and structured, like the entity-relationship diagram (ERD), which focuses not on the process but on describing clearly and precisely the data with which software applications will work.

The systems analysis challenge arises from translating internal organizational process knowledge into knowledge a computer system can recognize in order to accomplish useful work. Because of this mismatch, the processes involved in systems design are difficult to describe, for at least the following reasons:

Processes tend to be understood differently by each person working within them (ask three different people how “the process works”; get five different answers).

Processes sustain many variations. An order entry process, for example, may work one way for first-time customers, one way for large customers, one way for small customers, one way for former customers, and one way for customers whose purchasing agents are personal friends of your CEO's spouse.

Processes can prove very complex. Again, an order entry process that seems quite simple at one level proves very complex in large organizations that operate through many functional departments spread across a wide geographical area.

An existing business process is not guaranteed to be optimally effective. This means that not everything in every organization works well, nor are processes perfectly fitted to the organization. Thus, most managers look to information systems as a tool for fixing problems that exist within current processes. In other words, they look for ways to use systems to redesign processes.

The management implications of systems design arise largely from the interaction of systems design and process design, e.g., from an in-depth understanding of how information systems can be designed to foster constructive change within organizational processes. Each influences the other: in most systems design projects, processes influence systems and vice versa. Understanding these issues at more than a superficial level can greatly improve the success of any business design, process design, or system design ideas that you have.

Most traditional system design techniques do not arise from perspectives that managers might typically use to describe a business. Instead, they are developed from a systems engineering perspective: techniques were evolved by software engineers who were trying to understand business processes in order to build information systems that worked.

This point might also seem obvious, at least until you try to build a systems prototype of any meaningful size. In such a project, sooner or later a “management” perspective (e.g., a focus on the business demands associated with the process in question) will begin to conflict with an “information systems” perspective (e.g., a focus on the demands imposed by the technology that information engineers are trying to use to support the process). The business demands of the process (e.g., to deliver a product or service quickly and without error, based on often highly ambiguous customer preferences) tend to oppose the requirements of an information system (which can process data extremely quickly, but must receive that data in a highly structured, unambiguous form). For example, with the evolution of globally connected, online, real-time environments, most customers are very sensitive to their suppliers' online security and privacy practices. If customers want to track their products via the Internet, the company's information system should verify the customers' identities before servicing their requests over the Internet or other telecommunication technology. There are many ways to identify a customer, such as a fingerprint, an ID, or a password. In this situation, managers want to provide whatever services their customers want regardless of what the system can support, while technical staff want to restrict the identification methods customers may use because of the limitations of the information technology. The two parties must resolve this gap between the managerial and technical views during systems analysis and design. A similar mismatch appears in database management systems (DBMSs): most traditional DBMSs, whether relational, hierarchical, or otherwise, cannot support ambiguous logical data relationships. For example, if a user supplies several constraint conditions that do not have a specific range of values, but only an ambiguous one, as in the shortest-path or goal-programming problems of management science, the DBMS cannot return the correct result even with a complex SQL query.

The effect of these conflicting perspectives is illustrated in Fig. 1. It suggests that many characteristics of existing business processes can be left ambiguous, especially with regard to how those processes achieve explicit business goals.

Figure 1. Conflicts inherent in business process design and information system design. See //roger.babson.edu/Osborn/doit/readings/dfdsumry.htm.

Information systems inherently micromanage: the technology constraints imposed by hardware and database design make it impossible for them to do otherwise. It sounds overly obvious to point out that databases must have fields and those fields must be defined—but when information systems are being used to support cross-functional processes that extend beyond any but the most operational levels, the impact of such requirements forces systems analysts to reach a level of process description that many managers never attempt. The input, processing, and output requirements of information systems drive a need for understanding process characteristics at multiple levels and in great depth.

From this point of view the success of systems design hinges on finding techniques that effectively reconcile an organization's naturally ambiguous understanding of the processes that achieve strategic goals with an information system's naturally structured approach to handling data. One approach to resolving this conflict involves understanding the structures inherent in business data, an approach that gave rise to entity-relationship (ER) modeling. Another, complementary perspective focused on how data move through the tasks that make up business processes. This approach has come to be called data flow analysis (DFA), and is supported by a technique referred to as data flow diagramming (DFD). However, the DFD focuses more on the process than on the data itself.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B0122272404000319

System-Level Design

Ian Grout, in Digital Systems Design with FPGAs and CPLDs, 2008

10.4.4 Translating the Design to VHDL

After system analysis has been completed, the digital filter model is translated to VHDL code suitable for simulation and synthesis. This requires that the VHDL code be generated according to a set design translation procedure with the following eight steps:

1.

Translation preparation (according to the nine steps below).

2.

Set the architecture details (according to the six steps below).

3.

Translation from Simulink® model to VHDL code by reading the Simulink® model, extracting the necessary design information, and generating the VHDL code.

4.

Generate VHDL test bench.

5.

Simulate the VHDL code and check for correct operation to validate the operation of the generated VHDL code.

6.

Synthesize the VHDL code and resimulate the design to generate a structural design based on the particular target technology.

7.

Configure the CPLD and validate the operation of the design.

8.

Use the filter.

The nine steps of translation preparation are:

1.

Identify the parts to be translated into digital (the filter).

2.

Remove any unnecessary information, leaving only the filter model.

3.

Identify the digital filter interfacing.

4.

Identify the clock and reset inputs, along with any other filter signals.

5.

Identify any external communications required.

6.

Set up the support necessary to include the translation directives (see architecture details below).

7.

Identify the technology directives (any requirements for the target technology, such as CPLD) and the synthesis tool to be used.

8.

Identify any designer directives.

9.

Determine what test circuitry is to be inserted into the design and at what stage in the design process.

The six steps to set the architecture details are:

1.

Identify the particular architecture to use.

2.

Identify the internal wordlength within the digital signal processing part of the digital core.

3.

Identify any specific circuits to avoid (e.g., specific VHDL code constructs).

4.

Identify the control signals required by the I/O.

5.

Identify the number system to use (e.g., 2s complement) in the arithmetic operations.

6.

Identify any number scaling requirements to limit the required wordlength within the design.

The model translation must initially consider which architecture to use, either a processor-based architecture running a software application (standard fixed architecture processor or a configurable processor) or a custom hardware architecture based directly on the model. This idea is shown in Figure 10.31.

Figure 10.31. Filter architecture decisions

If the translation were performed manually, this could be accomplished by visual reference to the graphical representation of the model (i.e., the block diagram). If the translation were performed automatically (by a software application), it could be accomplished using the underlying text based model (i.e., with the Simulink®.mdl file).

A fixed architecture processor is based on an existing CISC or RISC architecture, and the translation either will generate the hardware design (in HDL) and the processor microcode together, or will use an existing processor design and only generate the processor microcode. The configurable processor is a processor design that dynamically changes specific aspects of the architecture based on the particular application.

Direct mapping starts with the model as presented and directly translates its functions to a custom hardware HDL code equivalent. Customized mapping uses custom architecture based on the model, but then determines the most appropriate way to implement its functions (e.g., by using multiple multiplication blocks or a single multiplexed multiplier block) based on the application.

No matter what particular architecture is chosen, in addition to generating the required digital signal processing algorithm hardware (as identified in the system block diagram), the design must also generate the necessary interfacing signals for external circuitry such as ADCs and DACs, along with the internal timing signals that control the signal processing operations and the storage and movement of data within the design. These interfacing and internal timing signals would be created by an additional circuit performing the functions of a control unit particular to the design.

In this case study, direct mapping of model functions will be considered, so the filter shown in Figure 10.28 will be translated. This requires the use of the following main functional blocks:

three integrator blocks

three differentiator blocks

one switch block

two constant values

The input is a single-bit bitstream pattern, and the output is a 16-bit pattern. The Simulink® model for the overall control system must be reviewed and should contain:

information for translation to VHDL

information not for translation to VHDL

The information not for translation to VHDL includes information such as visual attributes and software version information, which must be stripped from the representation of the model used for translation to VHDL. The Simulink® model code for the filter only is shown in Figure 10.32. This is the text description of the model shown in Figure 10.28. It consists of the blocks used, their attributes, and the interconnect between the blocks (lines). Interpreting this model requires knowledge of its model syntax and how the values that can be modified by the user are represented in the model. The syntax is readable, and the names used can be identified by comparison with the block diagram view.

Figure 10.32. Simulink® model for the CIC filter

To create a digital design to implement the filter, a control unit is needed within the design to generate the necessary timing signals to control the operation of the filter parts from master clock and reset inputs. The basic structure for this is shown in Figure 10.33.

Figure 10.33. Digital filter control

The CIC filter can be remodeled in VHDL, shown in Figure 10.34 as a structural description for the filter. Detailed operation of each of the blocks is defined in separate entity-architecture pairs.

Figure 10.34. VHDL model for the CIC filter
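Purely as a behavioral sketch, not the VHDL of Figure 10.34 and not the Simulink® model, the Python fragment below mimics the block structure identified earlier: three cascaded integrators, decimation, and three cascaded differentiator (comb) stages, fed by a single-bit bitstream and producing a wider output word. The decimation factor and the ±1 input mapping are assumptions made only for illustration.

def cic_decimator(bitstream, r=16, n=3):
    """Hypothetical n-stage CIC decimator: n integrators, decimate by r, then n combs."""
    integ = [0] * n
    comb_prev = [0] * n
    out = []
    for i, bit in enumerate(bitstream):
        x = 1 if bit else -1          # map the 1-bit input to +/-1
        for s in range(n):            # three cascaded integrator stages
            integ[s] += x
            x = integ[s]
        if (i + 1) % r == 0:          # decimate before the comb section
            for s in range(n):        # three cascaded differentiator (comb) stages
                x, comb_prev[s] = x - comb_prev[s], x
            out.append(x)             # would be truncated to a 16-bit word in hardware
    return out

# Usage: a constant-one bitstream settles to the filter's DC gain, r**n = 4096.
print(cic_decimator([1] * 256)[-1])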

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780750683975000106

Database Design Case Study #1: Mighty-Mite Motors

Jan L. Harrington, in Relational Database Design and Implementation (Fourth Edition), 2016

Designing the Database

The most effective approach to the design of a database (or collection of databases) for an environment as diverse as that presented by Mighty-Mite Motors usually involves breaking the design into components indicated by the organization of the company. As the design evolves, the designer can examine the entities and the relationships to determine where parts of the organization will need to share data. Working on one portion of the design at a time also simplifies dealing with what might at first seem to be an overwhelmingly large database environment. Paying special attention to the need for shared data helps ensure that shared data are consistent and suitable for all required uses.

A systems analysis indicates that the MMM database environment falls into the following areas:

Manufacturing (including finished goods inventory and raw materials ordering).

Sales to toy stores, and shipping of products ordered.

Purchase registrations.

Testing.

Problem handling.

Examining the Data Flows

During the systems analysis, a data flow diagram can be of enormous use in identifying where data are shared by various parts of an organization. The top-level DFD (the context diagram in Figure 13.11) actually tells us very little. It indicates that three sources outside the company provide data: customers (the stores to which the company sells), purchasers (the individuals who purchase products from the stores), and raw materials suppliers. Somewhere, all those data are used by a general process named “Manufacture and Sell Products” to keep the company in business.

Figure 13.11. Context DFD for Mighty-Mite Motors.

However, the level 1 DFD (Figure 13.12) is much more telling. As the data handling processes are broken down, five data stores emerge:

Figure 13.12. Level 1 DFD for Mighty-Mite Motors.

Raw materials: This data store holds both the raw materials inventory and the orders for raw materials.

Product data: The product data store contains data about the products being manufactured, product testing results, and the finished goods inventory.

Customer orders: This data store contains customer information, as well as order data.

Purchaser data: The purchaser data store contains information about the individuals who purchase products and the products they have purchased.

Problem data: This final data store contains problem reports.

As you examine the processes that interact with these five data stores, you will find a number of processes that manipulate data in more than one data store, as well as data stores that are used by more than one process:

The raw materials data store is used by the raw materials ordering and the manufacturing processes.

Product data are used by manufacturing, sales, shipping, and product registration.

Customer order data are used by sales and shipping.

The purchases data store is used by purchaser registration and problem handling.

The problem data store, used only by problem handling, is the only data store not shared by multiple processes.

The raw materials ordering process is the only process that uses only a single data store. Nonetheless, the level 1 DFD makes it very clear that there is no instance in which a single process uses a single data store without interaction with other data stores and processes. Given that each process in the DFD probably represents all or part of an application program, this suggests that the database designer should consider either a single database or a set of small databases, along with software to facilitate the interchange of data.

The DFD makes it very clear that the need for the integration of the various data stores is very strong. In addition, Mighty-Mite Motors is a relatively small business and therefore a single database that manages all needed aspects of the company will not grow unreasonably large. It will also be more cost effective and perform better than multiple databases that use some type of middleware to exchange data. Ultimately, the database designer may decide to distribute the database onto multiple servers, placing portions of it that are used most frequently in the division where that use occurs. The database design, however, will be the same, regardless of whether the final implementation is centralized or distributed. The essential decision is to create a single database rather than several smaller, interrelated databases that must exchange data.

The ER Diagram

The systems analyst preparing the requirements document for the Mighty-Mite Motors reengineering project has had two very good sources of information about exactly what needs to be stored in the database: the employees of the company and the paper documents that the company has been using. The document that is given to the database designer is therefore quite complete.

The design needs to capture all the information on the paper documents. Some documents are used only for input (for example, the product registration form or the order form). Others represent reports that an application program must be able to generate (for example, the line schedule report). Although the current documents do not necessarily represent all the outputs application programs running against the database will eventually prepare, they do provide a good starting place for the design. Whenever the designer has questions, he or she can then turn to Mighty-Mite’s employees for clarification.

Working from the requirements document prepared by the systems analyst, along with the paper input and output documents, the database designer puts together the ER diagram. Because there are so many entities, all of which interconnect, the diagram is very wide. It has therefore been split into three pieces so you can see it. As you look at each piece, keep in mind that entities that appear on more than one piece represent the connection between the three illustrations.

The first part (found in Figure 13.13) contains the entities for raw materials and manufacturing. This portion of the data model is dealing with three many-to-many relationships:

Figure 13.13. Mighty-Mite Motors ERD (part I).

material_order to raw_material (resolved by the composite entity material_order_line),

raw_material to model (resolved by the composite entity material_needed),

manufacturing_line to model (resolved by the composite entity line_schedule).

The second portion of the ERD (Figure 13.14) contains entities for product testing and sales. (Remember that in this instance, the customers are toy stores rather than individual purchasers.) There are two many-to-many relationships:

Figure 13.14. Mighty-Mite Motors ERD (part II).

test_type to model (resolved by the test entity),

order to model (resolved by the order_line composite entity).

The test entity is somewhat unusual for a composite entity. It is an activity that someone performs and, as such, has an existence outside the database. It is not an entity created just to resolve a many-to-many relationship.
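To make the composite-entity idea concrete, here is a small hypothetical sketch (in Python rather than the SQL of Figure 13.16) of how order_line resolves the many-to-many relationship between order and model; the attribute names are placeholders, not the book's schema.

from dataclasses import dataclass

@dataclass
class Order:
    order_numb: int           # primary key (placeholder name)
    customer_numb: int

@dataclass
class Model:
    model_numb: int           # primary key (placeholder name)
    description: str

@dataclass
class OrderLine:
    """Composite entity: one row per (order, model) pair, carrying the quantity ordered."""
    order_numb: int           # foreign key -> Order
    model_numb: int           # foreign key -> Model
    quantity: int

# One order can list many models; one model can appear on many orders.
lines = [OrderLine(1001, 5, 24), OrderLine(1001, 7, 12), OrderLine(1002, 5, 6)]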

At this point, the diagrams become a bit unusual because of the need to keep track of individual products rather than simply groups of products of the same model. The model entity, which you first saw in Figure 13.13, represents a type of vehicle manufactured by Mighty-Mite Motors. However, the product entity, which first appears in Figure 13.14, represents a single vehicle that is uniquely identified by a serial number. This means that the relationships between an order, the line items on an order, and the models and products are more complex than for most other sales database designs.

The order and line_item entities are fairly typical. They indicate how many of a given model are required to fill a given order. The shipment entity then indicates how many of a specific model are shipped on a specific date. However, the database must also track the order in which individual products are shipped. As a result, there is a direct relationship between the product entity and the order entity in addition to the relationships between order_line and model. In this way, Mighty-Mite Motors will know exactly where each product has gone. At the same time, the company will be able to track the status of orders (in particular, how many units of each model have yet to ship).

The final portion of the ERD (Figure 13.15) deals with the purchasers and problem reports. There are two many-to-many relationships:

Figure 13.15. Mighty-Mite Motors ERD (part III).

problem_type to product (resolved with the entity problem_report),

purchase to feature (resolved with the composite entity purchase_feature).

As with the test entity that you saw earlier, the problem_report entity acts like a composite entity to resolve a many-to-many relationship, but is really a simple entity. It is an entity that has an existence outside the database and was not created simply to take care of the M:N relationship.

Note: Calling an entity “problem_report” can be a bit misleading. In this case, the word “report” does not refer to a piece of paper, but to the action of reporting a problem. A “problem_report” is therefore an activity rather than a document. In fact, the printed documentation of a problem report will probably include data from several entities, including the product, problem_report, purchase, and owner entities.

If you look closely at the diagram, you’ll notice that there is a one-to-one relationship between the product and purchase entities. The handling of the data supplied by a purchaser on the product registration card presents an interesting dilemma for a database designer. Each product will be registered by only one purchaser. (Even if the product is later sold or given to someone else, the new owner will not have a registration card to send in.) There will be only one set of registration data for each product, at first thought suggesting that all the registration data should be part of the product entity.

However, there is a lot of registration data—including one repeating group (the features for which the purchaser chose the product, represented by the feature and purchase_feature entities)—and the product is involved in a number of relationships that have nothing to do with product registration. If the DBMS has to retrieve the registration data along with the rest of the product data, database performance will suffer. It therefore makes sense in this case to keep the purchase data separate and to retrieve it only when absolutely needed.

Note: One common mistake made by novice database designers is to create an entity called “registration card.” It is important to remember that the card itself is merely an input document. What is crucial is the data the card contains and the entity that the data describe, rather than the medium on which the data are supplied.

Creating the Tables

The tables for the Mighty-Mite Motors database can come directly from the ER diagram. They are as follows:

Generating the SQL

Assuming that the designers of the Mighty-Mite Motors database are working with a CASE tool, generating the SQL statements to create the database can be automated. For example, in Figure 13.16 you will find the SQL generated by Mac A&D from the ER diagram you saw earlier in this chapter.

Figure 13.16. SQL statements needed to create the Mighty-Mite Motors database.

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128043998000132

System analysis in energy engineering and ecology

Stanisław Sieniutycz, in Complexity and Complex Thermo-Economic Systems, 2020

Abstract

This chapter treats system analysis in energy engineering and ecology. It starts with basic notions of the field and information about early contributors. Next, the main aspects of system energy analysis are discussed. We also outline principles of mathematical modeling of industrial energy management. Decomposition of a global optimization problem leads to conclusions regarding diverse methodologies and their link with ecological criteria. Application of control thermodynamics to dynamical systems shows the essential role of the interface between energy limits, structure design, thermoeconomics, and ecology. The results contribute to system thermoeconomics and the integration of heat energy. Implications of the theory are confronted with practical information based on the experience of engineers engaged in thermal and chemical engineering (in the final part of the chapter).

Read full chapter

URL: //www.sciencedirect.com/science/article/pii/B9780128185940000064

What is an information systems development process that emphasizes flexibility to anticipate new requirements during development?

Agile development is a philosophy of systems development that emphasizes flexibility to handle changing requirements during the development project.

What is the name of the diagram that is often used to show the different layers of the system?

A Venn diagram is a widely used diagram style that shows the logical relation between sets, popularized by John Venn (1834–1923) in the 1880s.

What are those activities that enable a person to describe in detail how the information system will actually be implemented to provide the needed solution?

Those activities that enable a person to describe in detail how the information system will actually be implemented are called systems design. The process of collecting and interpreting facts and identifying problems is called systems analysis.

What system development activity involves defining the problem and specifying possible solution?

Systems analysis involves defining the problem, identifying its causes, specifying the solution, and identifying the information requirements that must be met by a system solution.
