A data model identifies the objects (entities) involved in the application and the relationships between them. From this, the entity-relationship (ER) diagram is solidified, and it forms the basis of the database design.
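As a rough illustration of that last step, here is a minimal sketch using Python's built-in sqlite3 module; the two-entity model (a Customer related to many Orders) is invented for the example, not taken from any particular application:

```python
import sqlite3

# Hypothetical ER model: Customer 1--* Order.
# Each entity becomes a table; the relationship becomes a foreign key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id)
    );
""")
```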
Conceptual design is just a rough draft picture (usually one of many) that leads up to the basic idea you are drawing. Detailed design is similar but further along in the design process and, hence the name, more detailed.
Software design principles are a set of guidelines that help us avoid a bad design. The principles are associated with Robert Martin, who gathered them in "Agile Software Development: Principles, Patterns, and Practices". According to Robert Martin there are three important characteristics of a bad design that should be avoided:
* Rigidity - it is hard to change because every change affects too many other parts of the system.
* Fragility - when you make a change, unexpected parts of the system break.
* Immobility - it is hard to reuse in another application because it cannot be disentangled from the current application.

The principles themselves are:
* Open/Closed Principle - software entities like classes, modules, and functions should be open for extension but closed for modification.
* Dependency Inversion Principle - high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.
* Interface Segregation Principle - clients should not be forced to depend upon interfaces that they don't use.
* Single Responsibility Principle - a class should have only one reason to change.
* Liskov Substitution Principle - derived types must be completely substitutable for their base types.

(A short code sketch of the open/closed idea follows after the criteria for a good design below.)

The design itself has four levels:
* Data design - created by transforming the analysis information model (data dictionary and ERD) into the data structures required to implement the software. Part of the data design may occur in conjunction with the design of the software architecture; more detailed data design occurs as each software component is designed.
* Architectural design - defines the relationships among the major structural elements of the software, the "design patterns" that can be used to achieve the requirements that have been defined for the system, and the constraints that affect the way in which the architectural patterns can be applied. It is derived from the system specification, the analysis model, and the subsystem interactions defined in the analysis model (DFD).
* Interface design - describes how the software elements communicate with each other, with other systems, and with human users; the data flow and control flow diagrams provide much of the necessary information.
* Component-level design - created by transforming the structural elements defined by the software architecture into procedural descriptions of software components, using information obtained from the process specification (PSPEC), control specification (CSPEC), and state transition diagram (STD).

These models collectively form the design model, which is represented diagrammatically as a pyramid with data design at the base and component-level design at the pinnacle. Note that each level produces its own documentation, and together these form the design specifications document, along with the guidelines for testing individual modules and for the integration of the entire package. Algorithm descriptions and other relevant information may be included as an appendix. In order to evaluate the quality of a design (representation), the criteria for a good design should be established.
Such a design should:
* exhibit good architectural structure
* be modular
* contain distinct representations of data, architecture, interfaces, and components (modules)
* lead to data structures that are appropriate for the objects to be implemented and that are drawn from recognizable design patterns
* lead to components that exhibit independent functional characteristics
* lead to interfaces that reduce the complexity of connections between modules and with the external environment
* be derived using a repeatable method that is driven by information obtained during software requirements analysis

These criteria are not achieved by chance. The software design process encourages good design through the application of fundamental design principles, systematic methodology, and review.
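To make one of the principles above concrete, here is a minimal Python sketch of the open/closed idea. The class names are hypothetical, chosen only for illustration: new behaviour is added by extending an abstraction, never by modifying existing code.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstraction that the calling code depends on."""
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):
    def __init__(self, w: float, h: float):
        self.w, self.h = w, h
    def area(self) -> float:
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return 3.14159 * self.r ** 2

def total_area(shapes: list[Shape]) -> float:
    # Closed for modification: adding a new Shape subclass
    # requires no change here (open for extension).
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))
```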
The advantages of a DBMS are as follows:
- Controlling redundancy.
- Providing storage structures for efficient query processing.
- Restricting unauthorized users.
- Providing concurrency.
- Providing backup and recovery.
- Enforcing integrity constraints (a short sketch follows below).

The disadvantages are as follows:
- Centralization: use of the same program at the same time by many users can sometimes lead to loss of some data.
- High cost of software.
- Technical expertise is required.
- Power dependency.
- Reporting features such as the charts of a spreadsheet like Excel may not be available in an RDBMS.
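As a small illustration of two of these points (integrity enforcement and recovery), here is a sketch using Python's built-in sqlite3 module; the account table and the transfer amounts are made up for the example. A failed statement inside a transaction causes the whole transaction to roll back, so the database is never left half-updated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 0)")
conn.commit()

try:
    with conn:  # one atomic transaction: commit on success, rollback on error
        conn.execute("UPDATE account SET balance = balance + 200 WHERE id = 1")  # applied...
        conn.execute("UPDATE account SET balance = balance - 200 WHERE id = 2")  # ...but this violates CHECK
except sqlite3.IntegrityError:
    pass  # the whole transfer was rolled back, including the first UPDATE

print(conn.execute("SELECT balance FROM account ORDER BY id").fetchall())
# [(100,), (0,)] -- both rows unchanged
```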
1. The organization of data into information. For data to be made meaningful it must have a purpose, and the purpose of the stored data should reflect the purpose and type of the information system. Data needs to be processed and organized before it becomes information. Organizing the data will most likely involve sorting and filtering (classifying) before it can be analyzed and stored for later retrieval; a small sketch of this follows below. Data dictionaries are used to help organize the data. 2. The ability to analyze the information. Once the data has become information it needs to be analyzed to make the most of what is stored. Analysis of databases is done through the tools of queries and reports.
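A minimal sketch of the sorting-and-filtering step in plain Python; the records are hypothetical, invented only to show the data-to-information idea:

```python
# Raw data: unordered, mixed-quality records (made up for the example).
raw = [
    {"name": "Ada",  "dept": "IT",    "hours": 38},
    {"name": "Bob",  "dept": "Sales", "hours": None},   # incomplete record
    {"name": "Cleo", "dept": "IT",    "hours": 42},
]

# Filter (classify) out incomplete records, then sort: data -> information.
clean = [r for r in raw if r["hours"] is not None]
by_hours = sorted(clean, key=lambda r: r["hours"], reverse=True)

for r in by_hours:
    print(r["name"], r["hours"])
```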
With a DBMS, you lose the intrinsic control of data management that exists in an integrated software solution. For instance, if you were to write a program in C++, you have the option to directly control how your program's data is handled by the actual memory on the host computer, how that memory is addressed and allocated, and how it is cached and stored on the disk, even on a bit-by-bit level if you so choose. On the programming layer, you also have direct control over how different tasks will operate on your data. So instead of creating related tables in a relational system and using SQL to manipulate that data, you can create routines that directly manipulate the data to your exact specifications. For example, instead of building tables you might store your data in a linked list or binary tree, which, if you're good, could show very significant improvements for tasks such as searching and sorting. A DBMS does mostly the same work, but it organizes data in a very general way that works reasonably well for all problems. If you build a custom data solution, you can find ways to optimize your code that cut through that general structure and perform tasks just not possible on a DBMS. However, considering the amount of work that goes into creating an integrated data solution, and the strong likelihood that the information-theory experts/gods who actually work for database companies like Oracle are way, way better programmers than you, there is probably no significant advantage to not using a DBMS unless the amount of data you are handling is very small. PS: I recommend PostgreSQL. People will try to tell you how great MySQL is, but it's because they're weak. PGSQL is the best DB you can get for nothing in the world. Good luck.
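To illustrate the custom-data-structure point, here is a toy binary search tree in Python. It is entirely hypothetical, not tuned code: the idea is that lookups take roughly O(log n) comparisons on balanced input instead of scanning every record.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    # Each comparison discards half the remaining tree (when balanced).
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)
print(search(root, 60), search(root, 65))  # True False
```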
Hiring a Database Administrator (DBA) for your organization can bring several benefits, including improved data security, optimized database performance, efficient data management, and enhanced data backup and recovery processes. A DBA can also help in ensuring data integrity, resolving database issues promptly, and implementing best practices for database design and maintenance. Overall, having a DBA can lead to better data management and smoother operations for your organization.
When there are several instances of the same data, it is referred to as "data redundancy." This occurs when identical pieces of information are stored in multiple places, which can lead to inefficiencies and inconsistencies in data management. Redundancy can be intentional for backup purposes or unintentional due to poor database design. Reducing data redundancy is often a goal in database normalization.
Data integrity is important in a database because it assures that all data in it can be traced and linked to other data. This ensures that all the data can be searched and recovered. It increases the stability, performance, and reliability of a database.
In database design, homonyms are identical names used for different data items, while synonyms are different names used for the same data item. They should be avoided because they can lead to confusion and inconsistency in data storage and retrieval. Using clear, unique names helps maintain data integrity and improves database efficiency.
Storing the same data in two places in a database can lead to inconsistency, making it challenging to maintain data integrity. It increases data redundancy, which can result in higher storage costs. Most importantly, updating the data in one place but not the other leaves the two copies in disagreement about the information stored.
Data redundancy refers to repetitive data in a database. In a system with redundant data it is difficult to manage the relationships between the data. Data redundancy is usually the result of a poorly designed database; it can be prevented by applying proper constraints on the data, as in the sketch below.
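A sketch of one such constraint using Python's built-in sqlite3 module (the table and columns are invented for the example): a UNIQUE constraint stops the same record from being stored twice.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        email TEXT UNIQUE,   -- constraint: no duplicate customers
        name  TEXT
    )
""")
conn.execute("INSERT INTO customer VALUES ('ada@example.com', 'Ada')")
try:
    conn.execute("INSERT INTO customer VALUES ('ada@example.com', 'Ada')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)  # UNIQUE constraint failed: customer.email
```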
Dirty data in a database management system (DBMS) refers to data that is inaccurate, incomplete, or inconsistent. This can include missing values, duplicate records, formatting errors, or outdated information. Dirty data can lead to mistakes in decision-making and analysis, so it's important to regularly clean and maintain the data in a database.
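A small Python sketch of a cleaning pass over such dirty data (the records are hypothetical): it trims formatting errors, drops incomplete rows, and removes duplicates.

```python
raw = [
    {"id": 1, "email": " ada@example.com "},   # formatting error (whitespace)
    {"id": 2, "email": None},                  # missing value
    {"id": 3, "email": "bob@example.com"},
    {"id": 3, "email": "bob@example.com"},     # duplicate record
]

seen, clean = set(), []
for row in raw:
    if row["email"] is None:             # drop incomplete rows
        continue
    row["email"] = row["email"].strip()  # fix formatting
    if row["id"] in seen:                # drop duplicates
        continue
    seen.add(row["id"])
    clean.append(row)

print(clean)
```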
In the field of relational database design, normalization is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics (insertion, update, and deletion anomalies) that could lead to a loss of data integrity. Database constraints provide a way to guarantee that:
1. rows in a table have valid primary or unique key values;
2. rows in a dependent table have valid foreign key values that reference rows in a parent table;
3. individual column values are valid.
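A sketch of all three guarantees using Python's built-in sqlite3 module (the schema is invented for the example); note that SQLite enforces foreign keys only when the pragma is switched on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
    CREATE TABLE parent (
        id INTEGER PRIMARY KEY                             -- (1) valid primary key
    );
    CREATE TABLE dependent (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER NOT NULL REFERENCES parent(id),  -- (2) valid foreign key
        qty       INTEGER CHECK (qty > 0)                  -- (3) valid column values
    );
""")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO dependent VALUES (1, 1, 5)")       # OK
try:
    conn.execute("INSERT INTO dependent VALUES (2, 99, 5)")  # no such parent
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```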
List the importance of security:
- To prevent unauthorized observation of data.
- To prevent unauthorized modification of data.
- To keep the data confidential.
- To make sure data integrity is preserved.
- To make sure only authorized users have access to the data.
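Real DBMSs implement the last point with GRANT/REVOKE-style privileges; as a purely illustrative Python sketch (the roles and rules here are invented, not any product's API), every operation is gated by an access check:

```python
# Hypothetical privilege table: which role may do what.
PRIVILEGES = {
    "analyst": {"read"},
    "admin":   {"read", "write"},
}

def check_access(role: str, action: str) -> None:
    # Deny by default: unknown roles or actions are rejected.
    if action not in PRIVILEGES.get(role, set()):
        raise PermissionError(f"{role!r} may not {action}")

check_access("admin", "write")        # OK
try:
    check_access("analyst", "write")  # unauthorized modification
except PermissionError as e:
    print(e)
```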
Storing lots of information in a computerized database can lead to issues such as data breaches, data corruption, and performance degradation. It also requires proper maintenance to ensure data accuracy, security, and accessibility. Additionally, scaling a database to accommodate increasing amounts of information can be challenging and require significant resources.
Integrity problems in a database management system (DBMS) refer to issues such as data inconsistencies, duplicates, or inaccurate information that may arise due to violations of data integrity constraints. These constraints ensure the accuracy and validity of data stored in the database by enforcing rules such as unique values, referential integrity, and domain constraints. Failure to maintain data integrity can lead to errors, data corruption, and compromised reliability of the information stored in the database.
Advantages of a database:
* Data independence - the data and the programs are independent. For example, if the format of the data changes, the file structure need not change: stating WHAT data is required is enough, with no need to worry about HOW to retrieve it.
* Avoiding data redundancy - with plain files, for example, a savings file and a transaction file might both hold the customer's address, so the data is replicated in both. In a database this can be avoided using normalization (first normal form and beyond); see the sketch below.
* Data security - the database provides security for access to all data.
* Data integrity - the data is centralized and accessed by all users, so its accuracy (integrity) is maintained.

Disadvantages: essentially the reverse of the above.
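A sketch of the savings-file/transaction-file example using Python's built-in sqlite3 module (the table and column names are invented): the customer's address is stored exactly once and referenced, rather than copied into every file.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Address stored exactly once...
    CREATE TABLE customer (
        id      INTEGER PRIMARY KEY,
        address TEXT NOT NULL
    );
    -- ...and referenced by both kinds of record, instead of copied into each.
    CREATE TABLE savings      (id INTEGER PRIMARY KEY, customer_id INTEGER REFERENCES customer(id), balance INTEGER);
    CREATE TABLE transactions (id INTEGER PRIMARY KEY, customer_id INTEGER REFERENCES customer(id), amount  INTEGER);
""")
conn.execute("INSERT INTO customer VALUES (1, '12 High St')")
conn.execute("UPDATE customer SET address = '34 Low Rd' WHERE id = 1")  # one update fixes every record
```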