By having a replicated file system, if one of the replicas crashes, the system can continue working by switching to another identical replica. Keeping multiple copies also helps protect against corrupted data.
Example: if there are three copies of a data file and all of them serve read and write operations, the failure of an individual write can be masked by taking the value returned by the other two copies as the correct one.
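The majority read described above can be sketched as follows (a minimal sketch; the `Replica` class and function names are illustrative, not from any particular system):

```python
# Sketch: masking a single corrupted replica by majority vote over three copies.
from collections import Counter

class Replica:
    def __init__(self):
        self.value = None

    def write(self, value):
        self.value = value

def read_majority(replicas):
    """Return the value held by a majority of replicas,
    masking a single corrupted or stale copy."""
    counts = Counter(r.value for r in replicas)
    value, votes = counts.most_common(1)[0]
    if votes >= len(replicas) // 2 + 1:
        return value
    raise RuntimeError("no majority among replicas")

replicas = [Replica() for _ in range(3)]
for r in replicas:
    r.write(42)
replicas[0].value = 99          # one replica holds a corrupted value
print(read_majority(replicas))  # the two correct copies outvote it -> 42
```

With three copies, any single corrupted value is outvoted by the remaining two.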
Performance can be improved by replicating the server and dividing the work among the replicas. This increases the number of processes that can access the data managed by the server.
Scaling across a geographical area
Clients at any site experience the improved availability of replicated data. When the local copy of the replicated data is unavailable, clients are still able to access a remote copy of the data.
Leading to inconsistency of the data
When there are multiple copies and one copy is modified, that copy diverges from the other replicas. If a modification is not propagated to the other copies, they become out of date.
Example: replication is used to improve the access time of web pages. However, users might not get the most up-to-date pages, because the pages returned might be cached versions of pages previously fetched from the web server.
Cost of the increased bandwidth needed to maintain replication
Replicated data must be kept up to date, so the network often carries a large number of messages whenever users connect to the data file to modify or delete data. Data replication can therefore become expensive.
Give at least two examples of a distributed system, and describe how scalability is addressed in those systems.
An online transaction processing system is scalable because it can be upgraded by adding new processors, storage and devices to process more transactions. It can be upgraded easily and transparently without shutting the system down.
The distributed nature of DNS (the Domain Name System) allows it to work effectively even though every host on the worldwide Internet is served, so it is said to scale well. DNS has a hierarchical design built around administratively delegated namespaces, combined with the use of caching. These reduce the load on the root servers at the top of the namespace hierarchy, while successful caching also limits client-perceived delays and wide-area network bandwidth consumption.
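The load-reducing effect of caching can be sketched as follows (a toy model; the two-level hierarchy, class names, and the record for `www.example.org` are all invented for illustration):

```python
# Sketch: a caching resolver answers repeat lookups locally, so only the
# first lookup ever reaches the upper levels of the name-server hierarchy.
class NameServer:
    def __init__(self, zone, records, parent=None):
        self.zone, self.records, self.parent = zone, records, parent
        self.queries_served = 0

    def resolve(self, name):
        self.queries_served += 1
        if name in self.records:
            return self.records[name]
        # delegate up the hierarchy if this server has no answer
        return self.parent.resolve(name) if self.parent else None

class CachingResolver:
    def __init__(self, server):
        self.server, self.cache = server, {}

    def lookup(self, name):
        if name not in self.cache:          # only a miss goes upstream
            self.cache[name] = self.server.resolve(name)
        return self.cache[name]

root = NameServer("root", {"www.example.org": "203.0.113.5"})
local = NameServer("local", {}, parent=root)
resolver = CachingResolver(local)
for _ in range(100):
    resolver.lookup("www.example.org")
print(root.queries_served)  # 1: the 99 repeat lookups were answered from cache
```

The 100 client lookups cost the root server a single query; everything else is served from the cache, which is the essence of why DNS scales.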
We discussed in the lectures three different techniques for redirecting clients to servers: TCP handoff, DNS-based redirection, and HTTP-based redirection. What are the main advantages and disadvantages of each technique?
TCP handoff achieves total transparency from the client's point of view because it works on transport-level streams. The client is therefore not aware of being redirected: when it sends a request to the service address, it cannot tell that an intermediate gateway has switched it between replicas.
The downside of TCP handoff is that the client is not offered more than one replica to choose from, and the redirection mechanism remains responsible for whatever the client eventually requests.
TCP handoff is best treated as a redirection mechanism because it distinguishes services only by the combination of the target machine's address and port number. To replicate a service it is therefore necessary to keep a full copy on each replica, which loses the flexibility of partial replication.
DNS-based redirection achieves transparency without loss of scalability. It is transparent because clients are obliged to use the addresses provided by the DNS server; they cannot tell whether the addresses belong to the home server or to one of its replicas. DNS is also very reliable as a distributed name-resolution service.
DNS allows the addresses of multiple replicas to be returned, letting the client choose one of them.
Another advantage of DNS is its good maintainability.
A disadvantage is that DNS queries carry no information about the client triggering the name resolution. The service-side DNS server only learns the network address of the DNS server that is asking about the service location.
DNS cannot distinguish between different services that are located on the same machine.
When a recursive query occurs, a chain of queries is built that ends at the service-side DNS server. That server then only knows the address of the DNS server one step back in the chain, not the origin of the chain of queries. Thus the service-side DNS server has no information about the location of the client.
HTTP-based redirection is easy to deploy: all that is needed is the ability to serve dynamically generated web pages. Besides assembling the actual content, the page generator can determine an optimal replica and rewrite internal references so that they point to that replica.
It is also efficient: although the initial document must always be retrieved from the primary server, all further interaction proceeds between the client and the selected replica, which should give the client optimised performance.
The disadvantage is a lack of transparency. The client receives a URL that explicitly points to a particular replica, so the browser becomes aware of the switching between different servers.
As for scalability, the need to contact the primary server first stays the same, so the single service server can become a bottleneck as the number of clients increases, making the situation worse.
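The mechanism can be sketched as follows (a hypothetical sketch: the replica URLs and the random selection policy are invented; a real system would pick the "best" replica for the client):

```python
# Sketch of HTTP-based redirection: the origin server answers the first
# request with a 302 status whose Location header points at a chosen replica.
import random

REPLICAS = ["http://replica-eu.example.com", "http://replica-us.example.com"]

def handle_request(path):
    """Origin server: pick a replica and redirect the client to it."""
    target = random.choice(REPLICAS)   # placeholder for a real selection policy
    return 302, {"Location": target + path}

status, headers = handle_request("/index.html")
print(status, headers["Location"])
```

The `Location` URL names one replica explicitly, which is exactly why the browser becomes aware of the switch, as noted above.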
Multicast communication refers to the delivery of data transmitted from a source node to an arbitrary set of destination nodes.
Application-level multicast is one approach to multicasting: the nodes are organised into an overlay network, which is used to disseminate information to the members. The nodes are organised into either a tree, in which there is a unique path between each pair of nodes, or a mesh, in which every node has multiple neighbours and thus there are multiple paths between each pair of nodes.
Organising the nodes into a mesh is more robust, because information can still be disseminated without immediately reorganising the whole overlay network.
Example: the multicast tree in Chord.
This is because whenever a node sends a multicast message towards the root of the tree, it can look up along the tree the information it needs.
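Tree-based dissemination can be sketched as follows (a minimal sketch; the `Node` class and the small three-level topology are illustrative, not Chord's actual structure):

```python
# Sketch of application-level multicast over a tree overlay: each node
# delivers the message locally and then forwards it to its children, so
# there is exactly one path from the root to every destination node.
class Node:
    def __init__(self, name):
        self.name, self.children, self.delivered = name, [], []

    def multicast(self, msg):
        self.delivered.append(msg)      # deliver locally
        for child in self.children:     # then forward down the tree
            child.multicast(msg)

root = Node("root")
a, b, c = Node("a"), Node("b"), Node("c")
root.children = [a, b]
a.children = [c]
root.multicast("update-1")
print([n.delivered for n in (root, a, b, c)])  # every node delivers it once
```

Because each pair of nodes is connected by a single path, each node receives the message exactly once; a mesh would trade that for redundant paths and robustness.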
In the case of reliable FIFO-ordered multicasts, the communication layer is forced to deliver incoming messages from the same process in the same order as they were sent. What are the permissible delivery orderings for the combination of FIFO and total-ordered multicasting in Figure 8-15 (shown on the previous page of the assignment)?
Why is receiver-based message logging considered to be better than sender-based logging? Explain the reasons behind your answer.
The reason is that recovery is entirely local. With sender-based logging, a recovering process has to contact the senders to get them to retransmit their messages.
When a receiving process crashes, its most recent checkpointed state is restored and the logged messages that had been delivered are replayed. Combining checkpoints with message logging makes it possible to restore a state that lies beyond the most recent checkpoint.
With sender-based logging, it is difficult to find a recovery line, as checkpointing can cause a domino effect, meaning the checkpoints will be inconsistent, and the cost of taking a checkpoint is high.
In conclusion, receiver-based message logging is preferable to sender-based logging.
Is the Triple Modular Redundancy (TMR) model capable of masking any type of failure? Explain your answer.
The Triple Modular Redundancy model is not capable of masking every kind of failure. TMR assumes that the voting circuit can determine which replica is in error once a 2-to-1 vote is detected: the voting circuit outputs the majority result and discards the erroneous one. TMR can therefore successfully mask an erroneous version only when at most one failure presents itself to the voter at a time.
However, if two or more faults appear in the system at the same time, TMR will not be able to mask them. TMR is also unable to mask failures effectively when its assumptions are invalid. For this reason it is sometimes extended to QMR (Quad Modular Redundancy).
Example: if X1, X2 and X3 were all to fail at exactly the same time, the voter would produce an undefined output.
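The voter's behaviour can be sketched as follows (a minimal sketch; the module outputs are illustrative values, and `None` stands in for the "undefined output" case):

```python
# Sketch of a TMR majority voter over the outputs of three modules.
from collections import Counter

def tmr_vote(outputs):
    """Return the 2-out-of-3 majority output, or None when all three
    modules disagree (the failure cannot be masked)."""
    value, votes = Counter(outputs).most_common(1)[0]
    return value if votes >= 2 else None

print(tmr_vote([1, 1, 0]))   # one faulty module is masked -> 1
print(tmr_vote([1, 2, 3]))   # all three differ -> None (undefined output)
```

A single faulty module is outvoted 2-to-1, but simultaneous failures of two or three modules leave the voter without a majority, matching the example above.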
Compare the two-phase commit protocol with the three-phase commit protocol (section 8 in the booklet). Would it be possible to eliminate blocking in two-phase commit if the participants were to elect a new coordinator? Discuss your answer.
The blocking can never be completely eliminated, because the newly elected coordinator may itself crash after the election. The remaining participants then still cannot reach a final decision, since reaching it requires the vote of the newly elected coordinator.
Why do persistent connections generally improve performance compared to non-persistent connections? Explain why persistent connections are disabled on some Web servers (why would anyone want to disable persistent connections)?
With persistent connections, the client can issue several requests without waiting for the reply to the first request, and without having to create a separate connection for each request/reply pair.
This matters because, with non-persistent connections, a separate TCP connection must be established to load every element of a Web document; when the document contains embedded content such as images or multimedia, this becomes inefficient.
Persistent connections are disabled on some servers because the middleware layer of some web servers is fragile and unable to cope with clients that send several requests over one connection. Those requests pile up in the middleware layer, causing responses to be slow, because there is only one connection for all the requests.
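The connection-setup saving can be sketched with a back-of-the-envelope count (a toy model; the object list and the one-round-trip handshake cost are simplifying assumptions for illustration):

```python
# Sketch: round trips needed to fetch a page with several embedded objects,
# assuming one round trip per TCP handshake and one per request/reply.
objects = ["index.html", "logo.png", "style.css", "video.mp4"]

def round_trips(objects, persistent):
    setups = 1 if persistent else len(objects)  # handshakes needed
    return setups + len(objects)                # plus one request/reply each

print(round_trips(objects, persistent=False))  # 8: a new connection per object
print(round_trips(objects, persistent=True))   # 5: one connection, reused
```

The gap grows with the number of embedded objects, which is why persistent connections usually win; the trade-off is the server-side fragility described above.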
Explain the difference between static content and dynamic content created by server-side CGI programs.
The difference between static web content and dynamic content is that:
Dynamic content can personalise the response while providing transparency to users: users cannot tell whether an HTML document was generated on demand or is actually stored at some location.
Values can be stored in a database and then retrieved and rendered on demand when the user requests them through the CGI program.
A CGI program also provides flexibility, as it can run an executable on the server, allowing interactivity on the site. Static web content cannot do this.
With static web content, users are aware that the information is stored, as the information provided is always the same. If multiple pages need to be updated, it is quite tedious: a lot of time is consumed because each update requires retrieving and editing the HTML documents, and creating a new website is similarly time-consuming.
Static content does not generate as much overhead as dynamic content, since a CGI program takes up time and memory to generate and produce its output, whereas static content is displayed exactly as it is retrieved.
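The contrast can be sketched as follows (a minimal sketch; the page strings and handler names are invented, and the dynamic handler stands in for a CGI program that would query a database):

```python
# Sketch: a static page is returned byte-for-byte as stored, while a
# dynamic (CGI-style) handler generates a personalised page on demand.
import datetime

STATIC_PAGE = "<html><body>Welcome!</body></html>"

def serve_static():
    return STATIC_PAGE  # identical bytes for every request

def serve_dynamic(user):
    # generated on demand, e.g. from values looked up in a database
    today = datetime.date.today()
    return f"<html><body>Hello {user}, it is {today}</body></html>"

print(serve_static() == serve_static())                # same output every time
print(serve_dynamic("alice") == serve_dynamic("bob"))  # personalised per user
```

The static handler just echoes stored bytes, while each dynamic request pays the generation cost in time and memory, which is the overhead difference described above.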