Distributed Computing
Also known as Peer-to-Peer (P2P). This environment is an ad-hoc network, generally grown from a small group of independent computers that need to share files and resources such as printers and network/Internet connections. It has allowed small businesses to improve some forms of productivity. For everything to run smoothly, this model usually needs internal technical skills or access to outsourced technical support.
DC Advantages
• Each user has control of their own equipment, to a reasonable degree.
• Each user can add their own programs at their leisure.
• Sometimes a cheaper up-front capital cost.
DC Disadvantages
• Typical lifespan of 3 years (maybe stretched to 5, with questionable results).
• Many moving parts (fans, hard drives) that are susceptible to failure.
• Greater vulnerability to security threats (both internal and external).
• Usually a higher cost of ownership when measured over 3+ years.
Centralized Computing
Centralized Computing takes some of the control, and all of the parts most susceptible to failure, away from the desktop appliance. All computing power, processing, program installation, backups and file structures reside on the Terminal or Application Server.
CC Advantages
• Centralized computing and file storage.
• Redundant technologies incorporated to reduce downtime.
• Computer stations replaced with ThinClient appliances with no moving parts, improving mean time between failures.
• Centralized management of all users, processes, applications, backups and security.
• Usually a lower cost of ownership when measured over 3+ years.
CC Disadvantages
• User access to removable-media drives is removed.
• In the rare event of a network failure, the ThinClient terminal may lose access to the terminal server. If this happens, some resources can still be used from the local client.
Centralized Computing
In a purely Centralized model, all computing resources reside at the primary Datacenter. This includes Domain Authentication Services, email, Applications, and Shared Files. Remote sites would access these resources using Thin Client devices (as opposed to PCs) and bandwidth-friendly enablers such as Citrix XenApp, Microsoft Terminal Services, or VMware Virtual Desktop technologies (there are pros and cons to each of these, but that is a topic for a different day).
The benefits of a Centralized model are lower capital and operational cost (minimal hardware at each site), security (all data stored in a secured datacenter), less administrative overhead (fewer resources needed since all equipment is in one location), less backup complexity, and greater control over potential risk areas such as Internet access.
The downside to a Centralized model is that your remote site's WAN connection is now a major point of failure. Whether this is a point-to-point, MPLS, or VPN connection, if this link goes down, that site now has zero access to anything at the Datacenter. A backup WAN and failover capability is a must if you choose a highly Centralized computing model.
Distributed Computing
In a purely Distributed model, each site is self-sustained for the most part. While some connectivity to the primary datacenter is required, the remote site would host its own Email Server, manage its own backups, control its own Internet access, and host its own Shared Files. Application access may still rely on HQ, although many applications support this type of distributed model.
The benefit of a Distributed model is that each site can 'survive' on its own; there is no Single Point of Failure in this regard. Also, assuming that the hardware at each site is stored in a secure Server Room and not with the office supplies (a big assumption in some cases, I know), this would also facilitate Business Continuity by letting sites serve as contingency sites for each other.
The downside to this approach, obviously, is cost. Not only would this require additional hardware and software costs, but you most certainly would require at least a partial onsite presence at each location regardless of how many remote management components are in place. Another consideration would be the backup architecture. Unless each site had a healthy amount of bandwidth, at least the initial data backup processing would have to be handled locally before being shipped or replicated offsite.
Supercomputers allow both parallel and distributed computing.
In centralized computing, tasks are done by one system; in distributed computing, tasks are shared by many computers.
Clustered system: many computers with shared storage, linked by a LAN or other network. Distributed system: many computers connected by a network, with no shared storage. Distributed computing is computing done on computers connected by a network; clusters are one type of distributed computing, MPPs are another, and grid computing is a third.
What is the difference between parallel computing and distributed computing? In the simplest form: Parallel Computing is a method where several individual (autonomous) systems (CPUs) work in tandem to resolve a common computing workload. Distributed Computing is where several dissociated systems work separately to resolve a multi-faceted computing workload. An example of parallel computing would be two servers that share the workload of routing mail, managing connections to an accounting system or database, solving a mathematical problem, etc. Distributed computing would be more like the SETI program, where each client works on a separate "chunk" of information and returns the completed package to a centralized resource that is responsible for managing the overall workload. If you think of ten men pulling on a rope to lift a load, that is parallel computing. If ten men have ten ropes and are lifting ten different loads from one place to consolidate at another place, that is distributed computing.
In parallel computing, all processors have access to a shared memory; in distributed computing, each processor has its own private memory.
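The distinction above can be sketched in runnable form. This is a minimal illustration, not a real distributed system: threads stand in for the "parallel" case (workers sharing one address space), and separate processes stand in for the "distributed" case (workers with private memory returning results to a coordinator, as in the SETI example). The fork-based process pool assumes a Unix-like OS.

```python
import multiprocessing as mp
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def partial_sum(chunk):
    # One worker's share ("chunk") of the overall workload.
    return sum(chunk)

data = list(range(100_000))
chunks = [data[i::4] for i in range(4)]  # split the load four ways

# Parallel: four threads in one process, all sharing the same memory --
# ten men pulling on one rope.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(partial_sum, chunks))

# Distributed (approximated locally): four separate processes, each with its
# own private memory; chunks go out, partial results come back to the
# coordinator. Fork start method keeps the example self-contained on Unix.
with ProcessPoolExecutor(max_workers=4,
                         mp_context=mp.get_context("fork")) as pool:
    distributed_total = sum(pool.map(partial_sum, chunks))

print(parallel_total, distributed_total)  # both equal sum(data)
```

In a real distributed deployment the process pool would be replaced by workers on other machines exchanging chunks over the network, but the shape of the coordination is the same.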
http://wiki.answers.com/Q/What_is_the_difference_between_Centralized_System_and_Distributed_System_as_far_as_operating_system_data_replicability_system_memory_and_homogeinity_are_concerned
What is the difference between distributed and parallel processing?
A distributed computing system requires each machine attached to the network to have specific software allowing the machines to talk to each other. A distributed virtual system allows the machines on a network to talk to each other without the use of central software.
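The "software allowing them to talk to each other" ultimately comes down to nodes exchanging messages over the network. A minimal sketch of two endpoints talking over TCP, both running on loopback here for the sake of a self-contained example; across machines, only the address would change.

```python
import socket
import threading

def serve_once(server_sock):
    # Accept a single connection, read a request, and acknowledge it.
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"ack:" + request)

# "Node A": listen on an OS-assigned free port (port 0).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# "Node B": connect and exchange one message.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'ack:hello'
```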