How can you decorate your assignment?
You can decorate your assignment by using headings, bullet points, numbering, bold or italic text, colored text, relevant images or charts, and a consistent font style to make it visually appealing and easy to read. Be sure to follow any guidelines provided by your instructor.
From the output of the "show ip interface brief" command, you can see the IP address, interface status (up or down), protocol status (up or down), and the method for obtaining the address (manual or dynamic) for each interface on the device.
Seminar topics related to data mining?
Data Mining Seminar Report
Introduction
Data mining is the process of extracting patterns from data. It is becoming an increasingly important tool for transforming raw data into useful information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection, and scientific discovery.
Data mining can be used to uncover patterns in data, but it is often carried out only on samples of the data. The mining process will be ineffective if the samples are not a good representation of the larger body of data, because data mining cannot discover patterns that are absent from the sample being "mined", even if they exist in the larger dataset. This inability to find patterns can become a source of disputes between customers and service providers. Data mining is therefore not foolproof, but it can be useful when sufficiently representative data samples are collected. Conversely, the discovery of a particular pattern in a particular sample does not necessarily mean that the pattern holds in the larger body of data from which the sample was drawn. An important part of the process is therefore the verification and validation of patterns on other samples of data.
The related terms data dredging, data fishing, and data snooping refer to the application of data mining techniques to samples that are (or may be) too small for valid statistical inferences to be made about the patterns discovered (see also data-snooping bias). Data dredging may, however, be used to develop new hypotheses, which must then be validated on sufficiently large, independent sample sets.
The complete article is available on Wikipedia (see reference 6 below).
Data Mining Overview
Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information - information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases.
Continuous Innovation
Although data mining is a relatively new term, the technology is not. Companies have used powerful computers to sift through volumes of supermarket scanner data and analyze market research reports for years. However, continuous innovations in computer processing power, disk storage, and statistical software are dramatically increasing the accuracy of analysis while driving down the cost.
Example
For example, one Midwest grocery chain used the data mining capacity of Oracle software to analyze local buying patterns. They discovered that when men bought diapers on Thursdays and Saturdays, they also tended to buy beer. Further analysis showed that these shoppers typically did their weekly grocery shopping on Saturdays. On Thursdays, however, they only bought a few items. The retailer concluded that they purchased the beer to have it available for the upcoming weekend. The grocery chain could use this newly discovered information in various ways to increase revenue. For example, they could move the beer display closer to the diaper display. And, they could make sure beer and diapers were sold at full price on Thursdays.
Data, Information, and Knowledge
Data
Data are any facts, numbers, or text that can be processed by a computer. Today, organizations are accumulating vast and growing amounts of data in different formats and different databases. This includes:
operational or transactional data, such as sales, cost, inventory, payroll, and accounting
nonoperational data, such as industry sales, forecast data, and macroeconomic data
metadata - data about the data itself, such as logical database design or data dictionary definitions
Information
The patterns, associations, or relationships among all this data can provide information. For example, analysis of retail point-of-sale transaction data can yield information on which products are selling and when.
Knowledge
Information can be converted into knowledge about historical patterns and future trends. For example, summary information on retail supermarket sales can be analyzed in light of promotional efforts to provide knowledge of consumer buying behavior. Thus, a manufacturer or retailer could determine which items are most susceptible to promotional efforts.
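To make the data / information / knowledge distinction concrete, here is a minimal Python sketch over made-up point-of-sale records (the products and counts are invented for illustration); it summarizes raw data into information, and the final comment indicates where knowledge would enter:

    # Raw data: facts a computer can process (hypothetical transactions).
    from collections import Counter

    transactions = [
        {"product": "diapers", "weekday": "Thu"},
        {"product": "beer",    "weekday": "Thu"},
        {"product": "milk",    "weekday": "Sat"},
        {"product": "beer",    "weekday": "Sat"},
    ]

    # Information: a summary of what sells and when.
    sales_by_product_day = Counter((t["product"], t["weekday"]) for t in transactions)
    for (product, day), count in sales_by_product_day.items():
        print(f"{product} sold {count} time(s) on {day}")

    # Knowledge: interpreting this summary in context, e.g. comparing the
    # counts for weeks with and without a promotional campaign.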
Data Warehouses
Dramatic advances in data capture, processing power, data transmission, and storage capabilities are enabling organizations to integrate their various databases into data warehouses. Data warehousing is defined as a process of centralized data management and retrieval. Data warehousing, like data mining, is a relatively new term although the concept itself has been around for years. Data warehousing represents an ideal vision of maintaining a central repository of all organizational data. Centralization of data is needed to maximize user access and analysis. Dramatic technological advances are making this vision a reality for many companies. And, equally dramatic advances in data analysis software are allowing users to access this data freely. The data analysis software is what supports data mining.
What can data mining do?
Data mining is primarily used today by companies with a strong consumer focus - retail, financial, communication, and marketing organizations. It enables these companies to determine relationships among "internal" factors such as price, product positioning, or staff skills, and "external" factors such as economic indicators, competition, and customer demographics. And, it enables them to determine the impact on sales, customer satisfaction, and corporate profits. Finally, it enables them to "drill down" into summary information to view detailed transactional data.
With data mining, a retailer could use point-of-sale records of customer purchases to send targeted promotions based on an individual's purchase history. By mining demographic data from comment or warranty cards, the retailer could develop products and promotions to appeal to specific customer segments.
For example, Blockbuster Entertainment mines its video rental history database to recommend rentals to individual customers. American Express can suggest products to its cardholders based on analysis of their monthly expenditures.
WalMart is pioneering massive data mining to transform its supplier relationships. WalMart captures point-of-sale transactions from over 2,900 stores in 6 countries and continuously transmits this data to its massive 7.5 terabyte Teradata data warehouse. WalMart allows more than 3,500 suppliers to access data on their products and perform data analyses. These suppliers use this data to identify customer buying patterns at the store display level. They use this information to manage local store inventory and identify new merchandising opportunities. In 1995, WalMart computers processed over 1 million complex data queries.
The National Basketball Association (NBA) is exploring a data mining application that can be used in conjunction with image recordings of basketball games. The Advanced Scout software analyzes the movements of players to help coaches orchestrate plays and strategies. For example, an analysis of the play-by-play sheet of the game played between the New York Knicks and the Cleveland Cavaliers on January 6, 1995 reveals that when Mark Price played the Guard position, John Williams attempted four jump shots and made each one! Advanced Scout not only finds this pattern, but explains that it is interesting because it differs considerably from the average shooting percentage of 49.30% for the Cavaliers during that game.
By using the NBA universal clock, a coach can automatically bring up the video clips showing each of the jump shots attempted by Williams with Price on the floor, without needing to comb through hours of video footage. Those clips show a very successful pick-and-roll play in which Price draws the Knicks' defense and then finds Williams for an open jump shot.
How does data mining work?
While large-scale information technology has been evolving separate transaction and analytical systems, data mining provides the link between the two. Data mining software analyzes relationships and patterns in stored transaction data based on open-ended user queries. Several types of analytical software are available: statistical, machine learning, and neural networks. Generally, any of four types of relationships are sought:
Classes: Stored data is used to locate data in predetermined groups. For example, a restaurant chain could mine customer purchase data to determine when customers visit and what they typically order. This information could be used to increase traffic by having daily specials.
Clusters: Data items are grouped according to logical relationships or consumer preferences. For example, data can be mined to identify market segments or consumer affinities.
Associations: Data can be mined to identify associations. The beer-diaper example is an example of associative mining (a minimal sketch appears after this list).
Sequential patterns: Data is mined to anticipate behavior patterns and trends. For example, an outdoor equipment retailer could predict the likelihood of a backpack being purchased based on a consumer's purchase of sleeping bags and hiking shoes.
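As a rough illustration of associative mining, the following Python sketch computes the support and confidence of one candidate rule (diapers -> beer) over invented market-basket data; real association-mining tools such as the Apriori algorithm search many candidate rules automatically:

    # Hypothetical market baskets; each set is one customer's purchase.
    baskets = [
        {"diapers", "beer", "chips"},
        {"diapers", "beer"},
        {"diapers", "milk"},
        {"bread", "beer"},
    ]

    antecedent, consequent = {"diapers"}, {"beer"}

    # Support: fraction of all baskets containing both item sets.
    both = sum(1 for b in baskets if (antecedent | consequent) <= b)
    support = both / len(baskets)

    # Confidence: of the baskets containing the antecedent, how many
    # also contain the consequent.
    with_antecedent = sum(1 for b in baskets if antecedent <= b)
    confidence = both / with_antecedent if with_antecedent else 0.0

    print(f"support={support:.2f} confidence={confidence:.2f}")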
Data mining consists of five major elements (a minimal end-to-end sketch follows this list):
Extract, transform, and load transaction data onto the data warehouse system.
Store and manage the data in a multidimensional database system.
Provide data access to business analysts and information technology professionals.
Analyze the data by application software.
Present the data in a useful format, such as a graph or table.
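A toy end-to-end sketch of these five elements in Python, using the standard-library sqlite3 module as a stand-in for the warehouse (the table name and figures are invented):

    import sqlite3

    # 1. Extract and transform hypothetical transaction data.
    raw = [("2024-01-04", "beer", 2), ("2024-01-04", "diapers", 1),
           ("2024-01-06", "beer", 5)]
    rows = [(day, product, qty) for (day, product, qty) in raw if qty > 0]

    # 2. Load, store, and manage the data (SQLite stands in for the warehouse).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (day TEXT, product TEXT, qty INTEGER)")
    db.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

    # 3. Provide access and 4. analyze: analysts query the stored data.
    cursor = db.execute(
        "SELECT product, SUM(qty) FROM sales GROUP BY product ORDER BY 2 DESC")

    # 5. Present the data in a useful format (a small text table).
    print(f"{'product':<10} total")
    for product, total in cursor:
        print(f"{product:<10} {total}")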
Different levels of analysis are available:
Artificial neural networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.
Genetic algorithms: Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of natural evolution.
Decision trees: Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi Square Automatic Interaction Detection (CHAID). CART and CHAID are decision tree techniques used for classification of a dataset. They provide a set of rules that you can apply to a new (unclassified) dataset to predict which records will have a given outcome. CART segments a dataset by creating 2-way splits while CHAID segments using chi-square tests to create multi-way splits. CART typically requires less data preparation than CHAID.
Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k record(s) most similar to it in a historical dataset (where k ≥ 1). Sometimes called the k-nearest neighbor technique (see the sketch after this list).
Rule induction: The extraction of useful if-then rules from data based on statistical significance.
Data visualization: The visual interpretation of complex relationships in multidimensional data. Graphics tools are used to illustrate data relationships.
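As an example of one of these levels of analysis, here is a minimal pure-Python k-nearest-neighbor classifier over invented two-dimensional records; the feature values and labels are made up, and production systems would use optimized libraries and far larger historical datasets:

    import math
    from collections import Counter

    # Hypothetical historical dataset: (feature vector, class label).
    history = [((1.0, 1.0), "buyer"), ((1.2, 0.8), "buyer"),
               ((5.0, 5.0), "non-buyer"), ((4.8, 5.2), "non-buyer")]

    def knn_classify(record, history, k=3):
        """Classify a record by majority vote of its k nearest historical records."""
        nearest = sorted(history, key=lambda item: math.dist(record, item[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    print(knn_classify((1.1, 0.9), history))  # -> buyer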
What technological infrastructure is required?
Today, data mining applications are available on systems of all sizes, from mainframe and client/server platforms to PCs. System prices range from several thousand dollars for the smallest applications up to $1 million a terabyte for the largest. Enterprise-wide applications generally range in size from 10 gigabytes to over 11 terabytes. NCR has the capacity to deliver applications exceeding 100 terabytes. There are two critical technological drivers:
Size of the database: the more data being processed and maintained, the more powerful the system required.
Query complexity: the more complex the queries and the greater the number of queries being processed, the more powerful the system required.
Relational database storage and management technology is adequate for many data mining applications of less than 50 gigabytes. However, this infrastructure needs to be significantly enhanced to support larger applications. Some vendors have added extensive indexing capabilities to improve query performance. Others use new hardware architectures such as Massively Parallel Processors (MPP) to achieve order-of-magnitude improvements in query time. For example, MPP systems from NCR link hundreds of high-speed Pentium processors to achieve performance levels exceeding those of the largest supercomputers.
This report is based on material from http://www.anderson.ucla.edu/
References:
1) http://wwwmaths.anu.edu.au/~steve/pdcn.pdf [PDF]
2) http://www.autonlab.org/tutorials/
3) http://technet.microsoft.com/en-us/library/ms167167.aspx
4) http://www4.stat.ncsu.edu/~dickey/Analytics/Datamine/Powerpoints/Data%20Mining%20Tutorial.ppt [PPT]
5) http://www.dsic.upv.es/~jorallo/dm/index.html
6) http://en.wikipedia.org/wiki/Data_mining
7) http://datamining.typepad.com/
The Foundations of Data Mining
Data mining techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and more recently, generated technologies that allow users to navigate through their data in real time. Data mining takes this evolutionary process beyond retrospective data access and navigation to prospective and proactive information delivery. Data mining is ready for application in the business community because it is supported by three technologies that are now sufficiently mature:
* Massive data collection
* Powerful multiprocessor computers
* Data mining algorithms
Commercial databases are growing at unprecedented rates. A recent META Group survey of data warehouse projects found that 19% of respondents are beyond the 50 gigabyte level, while 59% expect to be there by the second quarter of 1996. In some industries, such as retail, these numbers can be much larger. The accompanying need for improved computational engines can now be met in a cost-effective manner with parallel multiprocessor computer technology. Data mining algorithms embody techniques that have existed for at least 10 years, but have only recently been implemented as mature, reliable, understandable tools that consistently outperform older statistical methods.
In the evolution from business data to business information, each new step has built upon the previous one. For example, dynamic data access is critical for drill-through in data navigation applications, and the ability to store large databases is critical to data mining. From the user's point of view, the four steps listed in Table 1 were revolutionary because they allowed new business questions to be answered accurately and quickly.
An excellent article on the foundations of data mining: http://www.thearling.com/text/dmwhite/dmwhite.htm
A detailed index on data mining: http://www.thearling.com/index.htm
An omission error occurs when a required item or action is left out or not included. This type of error often leads to incomplete or inaccurate information. It is important to be vigilant in order to minimize omission errors, especially in critical tasks or procedures.
What is unit of information stored in a computer?
The unit of information stored in a computer is typically the bit, which represents a binary value of either 0 or 1. Data is stored and processed in computer systems using combinations of bits, with larger units such as bytes, kilobytes, and megabytes representing collections of bits.
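A small arithmetic sketch of how the larger units build on the bit, using the common binary convention of 1 kilobyte = 1,024 bytes (decimal definitions based on 1,000 are also in use):

    BITS_PER_BYTE = 8
    BYTES_PER_KB = 1024            # binary convention; SI prefixes use 1000
    BYTES_PER_MB = 1024 ** 2

    print(BITS_PER_BYTE)                    # 8 bits in a byte
    print(BYTES_PER_KB * BITS_PER_BYTE)     # 8,192 bits in a kilobyte
    print(BYTES_PER_MB * BITS_PER_BYTE)     # 8,388,608 bits in a megabyte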
What are the advantages of manual election?
One advantage of a manual election is that the state is spared the costs of setting up computerized voting. Also, the absence of computers avoids data security risks such as viruses and hacking.
To select all records from the "Persons" table where the value of the column "FirstName" starts with 'a', you can use the following SQL query:
SELECT * FROM Persons WHERE FirstName LIKE 'a%';
This query will retrieve all records where the "FirstName" column starts with the letter 'a'.
Offline website data refers to information stored on a user's device when they visit a website, often to allow the website to function properly even when the user is not connected to the internet. This data can include cached files, scripts, images, and other elements that can be accessed locally without needing to be downloaded from the web server every time.
What is a searchable organized collection of information?
A database is a searchable organized collection of information that is typically stored and accessed electronically. It allows for efficient data retrieval, storage, and management, making it easier to find specific information quickly.
Some ways of converting data into information?
The terms data and information are often used synonymously. However, many people see the distinction that data is some set of raw, unprocessed facts while information is the result of processing data and making it usable.
Since each set of data will be converted into a unique form of information, there is no general way of describing how to make this conversion.
Some examples:
A radar gun, used by law enforcement, will gather data in the form of a series of timings for how long it takes a signal to bounce off a target and return to the gun. This raw data is processed and results in information about how fast the target is moving (a small sketch of this conversion follows these examples).
Sports professionals are rated based on how well they play. In this case, raw data can itself be used as information (number of RBIs, number of rushing yards, number of goals stopped, etc.) or multiple separate sets of data can be combined to form a more generic listing of the best players.
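A tiny Python sketch of the speed-gun example above, converting raw round-trip timing data into speed information; the numbers are invented, and real guns typically rely on the Doppler shift rather than simple time-of-flight, so this is only illustrative:

    SPEED_OF_LIGHT = 299_792_458.0   # metres per second

    # Raw data: two round-trip times, measured a fixed interval apart.
    t1, t2 = 2.002e-6, 2.000e-6      # seconds
    interval = 0.01                  # seconds between the two measurements

    # Information: distances, and from their change, the target's speed.
    d1 = (t1 * SPEED_OF_LIGHT) / 2
    d2 = (t2 * SPEED_OF_LIGHT) / 2
    speed = (d1 - d2) / interval     # metres per second, toward the gun
    print(f"target closing at {speed:.1f} m/s")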
What is the Difference between hierarchical and flat address?
Hierarchical addressing organizes addresses in a tree-like structure with levels or layers, like in IP addresses. Flat addressing treats all addresses as equal without any structure or hierarchy, like in MAC addresses.
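A small Python sketch of this distinction, using the standard-library ipaddress module (the addresses are made up): an IP address splits into a network part and a host part, while a MAC address is a flat identifier with no routable structure:

    import ipaddress

    # Hierarchical: an IPv4 address interpreted within a /24 network.
    iface = ipaddress.ip_interface("192.168.1.37/24")
    print(iface.network)   # 192.168.1.0/24  - the network part (the "where")
    print(iface.ip)        # 192.168.1.37    - the host within that network

    # Flat: a MAC address is compared only for exact equality; there is no
    # network part that a switch could summarize or route on.
    mac_a, mac_b = "00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"
    print(mac_a == mac_b)  # False - equal or not, nothing in between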
Where will I get 2008 HSC Maharashtra board question papers?
You can try checking with your school or educational institution for past copies of the 2008 HSC Maharashtra board question papers. Alternatively, you may also find them online on educational websites or forums that provide resources for past exam papers.
Disadvantages of Hierarchical database model?
Some disadvantages of the hierarchical database model include complexity in representing certain types of relationships, limited flexibility in querying data due to its rigid structure, and potential data redundancy issues as each child can only have one parent record.
What is the System Development Life Cycle?
A System Development Life Cycle is the full sequence of events in the creation, use, maintenance and retirement of a computer system. Different service providers use different methodologies, with different naming conventions to describe the cycle.
How do you show that a grammar is LALR but not SLR?
To show that a grammar is LALR(1) but not SLR(1), construct the LR(0) automaton and build both parsing tables: demonstrate a conflict (typically shift-reduce) in the SLR table, then show that the conflict disappears in the LALR table. This can happen because SLR uses the full Follow set of a nonterminal as the lookahead for every reduction, while LALR computes smaller, state-specific lookahead sets, so spurious conflicts caused by the coarser Follow sets can vanish. A classic example is the grammar S -> L = R | R, L -> * R | id, R -> L: the SLR table has a shift-reduce conflict on '=' because '=' is in Follow(R), but the LALR lookahead for the reduction R -> L in the conflicting state excludes '=', so the grammar is LALR(1) but not SLR(1).
What are the advantages of computer crime?
There are no advantages to computer crime. It is illegal, unethical, and can have serious consequences for victims and perpetrators. It undermines trust in technology and can lead to financial losses, identity theft, and privacy breaches.
Computer science engineers: These folks are like the engineers and mechanics who design and build the cars from scratch, figuring out how the engine works, what makes it fast, and how to make it strong.
IT professionals: They're like the mechanics and technicians who keep the cars running smoothly. They troubleshoot problems, perform maintenance, enhance cybersecurity and ensure everything runs efficiently.
Define the two principal integrity rules for the relational model. Discuss why it is desirable to enforce these rules, and explain how a DBMS enforces them.
Who was the first woman in America to earn a PhD in computer science?
Mary Kenneth Keller, B.V.M. (1913 - 1985) was an American Roman Catholic religious sister, educator, and pioneer in computer science. She was the first woman to earn a Ph.D. in computer science in the United States.
What is hardware stack and software stack?
What can you do after 12th grade in computer science?
After completing 12th grade in computer science in India, consider pursuing a Bachelor's degree in Computer Science or related fields. Additionally, explore diploma/certification courses for specialized skills, seek internships for practical experience, and network to expand opportunities.
JNDI stands for Java Naming and Directory Interface
JNDI is an API specified in Java technology that provides naming and directory functionality to applications written in the Java programming language.
Does Communications and Computer Engineering include all bachelor's degrees in Computer Science?
It depends on the country. Some countries offer bachelor's degrees such as Mathematics Applied to Electronics, and such a degree also falls within the Computer Science category. So the answer is no; those are not the only bachelor's degrees in computer science.
The basic visual symbols in the language of art?
As a group, the basic visual symbols in the language of art are called the elements of art.