You should supplement the information in this chapter with some hands-on practice so that you can develop an understanding of how you can use these technologies to address real-world scenarios and solve problems in an advanced server environment.
Network Load Balancing (NLB) distributes incoming client traffic for a hosted service across a group of servers. This group of servers joined through NLB is called an NLB cluster or a server farm, and each member server in the farm is usually called a host or node. The purpose of NLB is to improve both the availability and scalability of a service hosted on all the individual nodes. NLB is surprisingly easy to get up and running in a default configuration. However, for the purposes of the exam, you need to understand more than the basics about NLB.
Make sure you also learn about the advanced configuration choices for the feature, such as priority settings and all port rule settings. To each client, an NLB cluster just looks like a single server assigned one name and one address. In the most typical scenario, NLB is used to create a web farm—a group of computers running Windows Server and working to support a website or a web application.
First, NLB supports availability because if one host in the farm fails, client requests are automatically directed to the remaining hosts. Second, NLB supports scalability because a group of servers can handle more client requests than a single server can. And as the demand for a service such as a website grows, you can keep adding more servers to the farm so that it can handle an even greater workload. An important point to understand about NLB is that each individual client is directed to exactly one server in the NLB cluster.
The client therefore gets just the processing, memory, and storage resources of that one host. Each node in the NLB cluster works independently without access to the resources in the other servers, and changes made on one server are not copied to other nodes in the farm. For this reason, you use NLB to support what are termed stateless applications, that is, applications whose data is not updated as a result of client access.
A management tool is installed by default only when you install the associated role or feature by using the Add Roles and Features Wizard.
If you use the Install-WindowsFeature cmdlet to install a role or feature, the associated management tool is not automatically installed. To install the tool with the role or feature, use the -IncludeManagementTools option. When you create a new NLB cluster in Network Load Balancing Manager, the first page of the New Cluster wizard requires you to connect to a server on which you have installed the NLB feature.
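For example, a minimal sketch of installing the NLB feature together with its management tools on a prospective host, run from an elevated Windows PowerShell session:

# Install the Network Load Balancing feature plus Network Load Balancing Manager and the NLB cmdlets
Install-WindowsFeature -Name NLB -IncludeManagementTools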
After connecting to a server, you choose an interface on that server to use for NLB traffic. In a production environment, you normally want to reserve a dedicated network adapter on every node for NLB and then assign these interfaces to a separate network segment that has its own connection to the local router. Whether or not you reserve a dedicated interface for NLB, the interface you assign to NLB must be given a static IP address.
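As a hedged illustration, a static address could be assigned to the dedicated NLB adapter with the built-in NetTCPIP cmdlets; the adapter name and addresses below are hypothetical:

# Assign a static IPv4 address to the adapter reserved for NLB traffic
New-NetIPAddress -InterfaceAlias "NLB" -IPAddress 192.168.10.11 -PrefixLength 24 -DefaultGateway 192.168.10.1
# Point the interface at the internal DNS server
Set-DnsClientServerAddress -InterfaceAlias "NLB" -ServerAddresses 192.168.10.5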
You will later assign this interface a second IP address that will be shared by every node in the NLB cluster. The settings on the next page, Host Parameters, apply only to the local host (node), not to the entire NLB cluster. The value 1 is given to the host with the highest priority. This priority value determines which node in the NLB cluster will handle network traffic that is not load balanced (in other words, traffic not covered by the port rules you create later in the wizard).
If the host with the highest priority is not available, the host with the next highest priority handles this non-load-balanced traffic. This setting is also known as the Host Priority setting. The dedicated IP addresses you assign to the individual hosts in an NLB cluster must all be located on one logical subnet and be reachable externally, as necessary, through a working routed pathway or from the local network segment.
This page also lets you set the initial state of the host: the options are Started (the default), Suspended, or Stopped. You can also enable the option to retain the suspended state after the computer restarts. Next, you choose the virtual IP address or addresses that will be assigned to the entire server farm as a whole.
These settings can be modified after the cluster is created. You also choose a cluster operation mode; in multicast mode, the cluster MAC address is used as a multicast address, which each host eventually translates into its own original MAC address. The final step is to define port rules. These port rules define which traffic will be load balanced in the NLB cluster and how it will be load balanced. Only one port rule can ever apply to an incoming packet. One port rule is predefined, and the port range you define in a new rule cannot overlap a range defined in another port rule.
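The same configuration can be driven from Windows PowerShell with the NetworkLoadBalancingClusters module. The following is a minimal sketch rather than a definitive procedure; the host names, interface name, and addresses are hypothetical, and the parameters should be verified with Get-Help before use:

# Create the NLB cluster on the first host, assigning the shared (virtual) IP address
New-NlbCluster -InterfaceName "NLB" -ClusterName "web.tailspintoys.com" -ClusterPrimaryIP 192.168.10.50 -OperationMode Multicast

# Join a second host to the farm
Get-NlbCluster | Add-NlbClusterNode -NewNodeName "WEB2" -NewNodeInterface "NLB"

# Remove the predefined catch-all rule (it covers every port), then load balance HTTP with Single affinity
Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force
Add-NlbClusterPortRule -InterfaceName "NLB" -StartPort 80 -EndPort 80 -Protocol Tcp -Mode Multiple -Affinity Single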
The Multiple Host filtering mode is the default setting. Multiple Host filtering mode provides both load balancing and fault tolerance for all incoming requests matching the port rule. Client requests matching the port rule are distributed among active nodes in the farm. When you choose the Multiple Host filtering mode, you also need to choose an affinity setting, which determines how repeated traffic from the same client is handled during a session. With the None affinity setting, subsequent traffic from the client can be directed to any node in the cluster, depending on the existing load.
The Single affinity setting directs all traffic from a given client IP address to the same node. The advantage of this setting is that it allows user state data to be maintained from one session to the next if this data is saved on the local node. This is the default affinity setting. The Network affinity setting directs traffic from the same Class C address range to the same node, which is useful when clients reach the cluster through a pool of proxy servers: requests from a client such as Client1 are handled by the same node even if they arrive through different proxy servers on the same subnet. Be aware that your choice among these three affinity settings can be restricted by the application you are hosting in the NLB cluster.
The Single Host filtering mode directs all matching traffic toward the host with the highest priority value. If that host fails, then the traffic is directed to the host with the next highest priority.
You might remember that this same service is provided for traffic that does not match any port rule at all. So why bother creating a port rule in Single Host mode? The advantage of configuring a port rule in Single Host mode is that with a port rule you can later define a custom server priority for this particular traffic with the Handling Priority setting in Network Load Balancing Manager. For port rules that use Multiple Host filtering mode, the Timeout setting extends affinity through configuration changes in the NLB cluster, up to the number of minutes specified.
If, for example, the NLB cluster is used to support a web storefront, a customer might experience the benefit of the Timeout setting by always being able to retain items in a shopping cart for the number of minutes specified.
A port rule's Load Weight setting, configured on each individual host, determines the relative share of load-balanced traffic that the host handles. By default, Equal is selected, which gives the node an average-weighted (proportional) distribution of the network load. If you clear the Equal setting, you can assign the host a greater or smaller share of the network traffic directed at the farm.
In this case, the proportion handled is determined by the local load weight divided by the total of all the load weights across the NLB cluster. The default load weight is 50. With Single Host filtering mode, the available server with the highest priority always receives the traffic specified in the port rule.
The advantage of creating a port rule for specific traffic with Single Host filtering mode enabled, as opposed to creating no port rule at all, is that with a defined port rule you can set a custom server priority for that traffic. The Handling Priority is where you set that custom server priority. If this value is not set here, the priority value assigned to the local host is the one set in Host Parameters for the entire cluster. Host Priority determines which server in an NLB cluster receives traffic that is not covered by a port rule.
Handling Priority is a custom server priority value used for traffic covered by a port rule that is assigned Single Host filtering mode. You can add up to 32 hosts to an NLB cluster. When the software on an NLB cluster must be upgraded, one option is to take the entire cluster offline and upgrade every host at once. However, the disadvantage of this procedure is that the cluster naturally cannot service client requests during the period that it is offline.
Fortunately, many applications and services hosted in NLB support a better option, called a rolling upgrade, for upgrading NLB clusters. A rolling upgrade lets you leave the NLB cluster online during the upgrade process.
In a rolling upgrade, you take each individual node offline, upgrade it, and then bring the node back online, one at a time. You use the Drainstop function to take each node offline to ensure that existing connections to that host are terminated gracefully. In Network Load Balancing Manager, you can find the Drainstop function on the Control Host submenu of the shortcut menu that appears when you right-click a host in the console tree. With Drainstop, the node refuses new connections and new client requests are simply directed to the nodes that remain online.
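The drain-and-restart cycle of a rolling upgrade can also be scripted with the NLB cmdlets. A hedged sketch, one node at a time, with a hypothetical host name:

# Drain existing connections on the node being upgraded, then take it out of service
Stop-NlbClusterNode -HostName "WEB1" -Drain -Timeout 30
# ... apply the software update and restart the server as needed ...
# Return the upgraded node to service
Start-NlbClusterNode -HostName "WEB1"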
To bring each host back online after you upgrade it, use the Start function for the same host, available on the same submenu. You complete the process by continuing to upgrade each individual cluster host, one at a time, until the entire cluster is upgraded.

You are the systems administrator at Tailspin Toys, and you are responsible for managing the server infrastructure that hosts the Tailspin Toys website. Traffic to the Tailspin Toys website has been gradually increasing.
Increased traffic to the website has decreased the speed at which it responds. Additionally, in the last month, the website has been offline when software updates are applied. In the past, this was considered acceptable by management, but now they want the website to be available to customers even when software updates are being applied. With the preceding information in mind, answer the following questions.
1. Which of the Tailspin Toys servers can you make highly available by deploying Network Load Balancing?
2. After implementing Network Load Balancing, what function should you use to ensure that any connections to the highly available servers are terminated gracefully?
3. Which filtering mode and affinity option would you select to ensure that clients interact with the same IIS server during a session?

Objective summary
Client requests received by the NLB cluster are distributed among all the hosts (also called nodes) when these requests match configured port rules.
Single affinity provides client-host affinity on a per-client basis. To set a custom host priority, first create a port rule matching the desired traffic with Single Host filtering mode enabled. Then modify the Handling Priority parameter by editing the port rule in the properties of the node to which you want to assign the custom priority.
Objective review
Answer the following questions to test your knowledge of the information in this objective.
1. You discover that web traffic destined for the NLB cluster is distributed very unevenly among the individual NLB cluster members. Port rule settings for each node have not been modified from the defaults. You want to ensure that client web requests are distributed as evenly as possible among all 10 nodes in the NLB cluster. Which setting should you enable?
A. Affinity-None
B. Affinity-Single
C. Affinity-Network
D.

2. Your network includes an NLB cluster that is used to support an e-commerce site. Use of the site is growing. Whenever you add a new node to the NLB cluster, you receive complaints from customers that items in their shopping carts disappear.
You want to reduce the likelihood that users will experience this problem in the future. What should you do?
A. Modify the Load Weight settings
B. Enable the Single Host filtering mode
C. Enable the Multiple Host filtering mode
D. Modify the Timeout settings

3. You have configured an NLB cluster. You want to designate a particular server within the NLB cluster to handle all the traffic that is not caught by any port rule.
A. Modify the Load Weight setting
B. Configure the Host Priority settings
D. Configure a Handling Priority

Objective 1.2: Configure failover clustering
Unlike NLB, failover clustering is normally used to provide high availability for data that can be frequently updated by clients. Typical services hosted in failover clusters include database servers, mail servers, print servers, virtual machines hosted in Hyper-V (often hosting a critical application), and file servers.
Failover clusters are one of the most advanced topics you need to learn for the exam. The services or applications configured for protection in a failover cluster are known variously as roles, clustered roles, clustered services and applications, highly available services and applications, or services and applications configured for high availability.
The individual servers in a failover cluster are called nodes. If the node that is hosting a clustered role fails, the role is moved to, and restarted on, another node in the cluster, a process known as failover. Users experience only minimal disruption, if any, as a result of this failover process. There are important differences between NLB clusters and failover clusters.
First of all, in a failover cluster, only one server normally hosts a clustered service at a time. Because there is only one source of data for roles in a failover cluster, there is no possibility of data inconsistency for these clustered services from client to client.
Consequently, failover clusters are especially useful to help ensure the availability of services for which clients can update data. Typical services you see hosted as roles in a failover cluster include a file server, a database server, a print server, a mail server, and even a virtual machine.
In a basic, two-node failover cluster, failover simply moves the clustered role from the failed node to the surviving node. As for hardware, all components must meet the qualifications for the Certified for Windows Server 2012 or Windows Server 2012 R2 logo. If you have only one physical server, an even better option for testing and learning about the feature is to configure two or more virtual machines as your nodes.
If the role you are clustering is a virtual machine hosted in Hyper-V, you have an additional convenient option: you can store the VM files on a Windows Server 2012 or Windows Server 2012 R2 network share. If you are new to SANs, you might want to search for basic tutorials on this technology so you can feel more confident about this topic.
Ideally, you should also configure redundant switches, routers, and network paths to the cluster.

Understanding the software requirements of a failover cluster
Windows Server 2012 and Windows Server 2012 R2 failover clusters require the Standard or Datacenter edition of the operating system, and all nodes should run the same version and edition. Finally, all nodes must have the Failover Clustering feature installed.
The steps required to create a new failover cluster are less likely to appear on the exam. Still, to prepare for the exam, you really need to create your own failover cluster in a test network. Failover clusters are best understood when you see them in action. You can begin by creating a bare-bones failover cluster with an empty role and then configure all the required components later.
To create a failover cluster, join the servers to the appropriate AD DS domain and connect these servers to shared storage. You also need to install the Failover Clustering feature on all nodes in the cluster. Before you create the cluster, you should run the cluster validation tests against your proposed configuration. When you run the tests, you simply specify the nodes you will add to the cluster. You can also run the tests again later, after you create the cluster, by specifying the cluster by name instead of specifying the individual nodes.
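A minimal sketch of these preparatory steps in Windows PowerShell (node names are hypothetical); Test-Cluster runs the same validation tests as the Validate a Configuration Wizard:

# Install the Failover Clustering feature and its management tools on each prospective node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName NODE1
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName NODE2

# Validate the proposed configuration before creating the cluster
Test-Cluster -Node NODE1, NODE2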
After validation, you create the cluster itself by using the Create Cluster Wizard in Failover Cluster Manager or the New-Cluster cmdlet in Windows PowerShell. This step installs the software foundation for the cluster, converts the attached storage into cluster disks, and creates a computer account in Active Directory for the cluster. The procedure is simple. If you want to create the cluster without adding the eligible storage, use the -NoStorage option with the New-Cluster cmdlet.
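For example, a hedged sketch of creating the cluster from Windows PowerShell; the cluster name, node names, and address are hypothetical:

# Create the cluster without automatically adding eligible shared storage
New-Cluster -Name CLUSTER1 -Node NODE1, NODE2 -StaticAddress 192.168.10.60 -NoStorage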
You can use an empty role to test the basic functionality of the failover cluster before you configure any components such as networking, storage, Quorum, or roles. To create an empty role in a failover cluster, select the Roles node in the console tree in Failover Cluster Manager and then click Create Empty Role in the Actions pane. You can then test failover by moving this empty role to another node. To do this, in the center pane of the console, select the role and use the Move action to choose a destination node.
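The same failover test can be driven from Windows PowerShell; the role name below is hypothetical:

# List the clustered roles (groups) and their current owner nodes
Get-ClusterGroup
# Move the empty role to another node to verify that failover works
Move-ClusterGroup -Name "New Role" -Node NODE2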
You can observe the status changes in the center pane of the snap-in as the clustered service instance is moved. If the Owner Node value changes successfully from the name of one node to another, failover is functional in the cluster. The following sections provide a brief overview of what you need to understand for the exam about configuring cluster networking, storage, and Quorum.
Configuring cluster networking
The cluster networking settings you need to know for the exam can be found in the cluster network properties dialog box. You access these settings by right-clicking a particular network in the console tree of Failover Cluster Manager and then clicking Properties. Cluster nodes exchange heartbeat signals over these networks; the heartbeat determines whether a given node, and the services it hosts, is still available.

Active Directory Detached Clusters do not require computer objects representing the cluster to be present within Active Directory.
The key to understanding Active Directory Detached Clusters is that while AD DS is not required for the cluster network name, the nodes that comprise the cluster must still be members of an Active Directory domain. The benefit of this new feature is that it is possible to create failover clusters without requiring the permission to create computer objects within AD DS.
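Creating a detached cluster is a Windows PowerShell operation; the following is a minimal sketch, assuming the nodes are already domain members (names and address are hypothetical):

# Create a failover cluster whose network name is registered only in DNS, not in AD DS
New-Cluster -Name CLUSTER2 -Node NODE1, NODE2 -StaticAddress 192.168.10.61 -NoStorage -AdministrativeAccessPoint Dns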
Microsoft recommends not using Active Directory-detached clusters in scenarios that require Kerberos authentication. Note that this cluster type can be deployed only by using Windows PowerShell, as in the sketch above.

Configuring cluster storage
In the real world, configuring cluster storage is a fairly complicated topic. On the exam, however, there are only a few concepts you need to focus on: adding disks to the cluster, understanding and configuring cluster storage pools, and understanding and configuring cluster-shared volumes.
Adding new disks to a cluster
If you want to add disks to an existing failover cluster, begin by provisioning the logical disks from shared storage, such as from an iSCSI target. Once the shared disk appears in Server Manager, initialize the disk and bring it online. Next, create a volume from this disk; when you do so, the name of the cluster appears as a server name.
You can then add the disk to the cluster in Failover Cluster Manager. To do so, select the Disks node in the console tree below Storage and then click Add Disk in the Actions pane. The disk you add should already include one or more volumes before you add it. As a final option, you can add a disk to a failover cluster by using Windows PowerShell; to do so, use the Add-ClusterDisk cmdlet.

You can also create storage pools for a failover cluster. These storage pools are similar to the ones you can create for an individual server by using Storage Spaces (a feature covered by a different exam in this series). As with the Storage Spaces feature, you can use these storage pools in a failover cluster as a source from which you can then create virtual disks and, finally, volumes.
In fact, if you have a shared SAS disk array, you can use Server Manager to create the pool and then use the Add Storage Pool option to add it to the cluster. After you create the pool, you need to create virtual disks from the new pool and volumes from the new disks before you can use the clustered storage space for hosting your clustered workloads.
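A hedged Windows PowerShell sketch of building a clustered storage pool and a fixed, mirrored virtual disk from it; the friendly names are hypothetical, and the friendly name of the clustered storage subsystem varies by environment:

# Select physical disks that are eligible for pooling (shared SAS in a clustered scenario)
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool on the clustered storage subsystem
New-StoragePool -FriendlyName "ClusterPool1" -StorageSubSystemFriendlyName "Clustered*" -PhysicalDisks $disks

# Carve a fixed, mirrored virtual disk from the pool; a volume is then created from this disk
New-VirtualDisk -StoragePoolFriendlyName "ClusterPool1" -FriendlyName "Data1" -Size 100GB -ProvisioningType Fixed -ResiliencySettingName Mirror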
Note the following restrictions for clustered storage pools: no additional layer of RAID or any other disk subsystem is supported, whether internal or external; thin provisioning is not supported (fixed provisioning only); parity layouts are not supported; and boot disks should not be added to a clustered pool.

Cluster-shared volumes (CSVs) are another storage option you need to understand. The biggest advantage of CSVs is that they can be shared by multiple active cluster nodes at a time. This is not normally possible with shared storage; in fact, two cluster nodes cannot normally use even two separate volumes residing on the same logical disk or LUN.
CSVs achieve this shared access of volumes by separating the data from different nodes into VHD files. Within each shared volume, multiple VHDs are stored, each used as the storage for a particular role for which high availability has been configured. Another important use for CSVs is with live migration in failover clusters (a feature also described later in this chapter).
Though CSVs are not required for live migration, they are highly recommended because they optimize the performance of the migration and reduce downtime to almost zero. How might CSVs appear on the exam? It helps to understand the problem they were introduced to solve. Previously, when several clustered roles on the same node relied on the same LUN, if an application, service, or virtual machine connected to that LUN failed and needed to be moved to another node in the failover cluster, every other clustered application or virtual machine on that physical node would also need to be failed over to a new node and potentially experience some downtime.
To avoid this problem, each clustered role was typically connected to its own unique LUN as a way to isolate failures. This strategy created another problem, however: a large number of LUNs that complicated setup and administration. CSVs address both problems by letting multiple clustered roles share a single LUN. You can run these roles on any node in the failover cluster, and when a role fails, it can fail over to any other physical node in the cluster without affecting other roles (services or applications) hosted on the original node.
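Converting an available cluster disk into a CSV is a one-step operation, either in Failover Cluster Manager (Add To Cluster Shared Volumes) or in Windows PowerShell; the disk name below is hypothetical:

# Add an available cluster disk to cluster-shared volumes; it then appears under C:\ClusterStorage
Add-ClusterSharedVolume -Name "Cluster Disk 2"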
CSVs thus add flexibility and simplify management.

Using shared virtual hard disk
A shared virtual hard disk allows you to share a virtual hard disk file, in the .vhdx format, among multiple virtual machines. You can use these special shared virtual hard disks as shared storage for virtual machine failover clusters (guest clusters). For example, one shared virtual hard disk might host the disk witness and other shared virtual hard disks might host data for the highly available application.
Shared virtual hard disks substantially simplify the process of deploying guest clusters because you can use a shared .vhdx file in place of shared storage that would otherwise have to be presented to the guest virtual machines through, for example, iSCSI or virtual Fibre Channel connections.
Shared virtual hard disk is a feature new to Windows Server 2012 R2 and is not available to guest failover clusters running on Windows Server 2012 Hyper-V hosts. Guest clusters running the Windows Server 2012 operating system can access shared virtual hard disks as shared storage as long as you have installed Windows Server 2012 R2 Integration Services in the guests. You can deploy a shared virtual hard disk for a Hyper-V guest failover cluster either by using cluster-shared volumes on block storage or by deploying it on a scale-out file server with SMB 3.0 file-based storage.
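A hedged sketch of provisioning a shared .vhdx on a cluster-shared volume follows; the path and size are hypothetical, and the disk is then attached to each guest cluster node with virtual hard disk sharing enabled in the disk's Advanced Features settings in Hyper-V Manager:

# Create a fixed-size .vhdx on a cluster-shared volume to be shared by the guest cluster nodes
New-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -SizeBytes 60GB -Fixed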
Configuring Quorum
The Quorum configuration in a failover cluster determines the number of active, communicative nodes that are required for the cluster to run. In a failover cluster, every node that remains functional and communicative with other nodes submits one vote in favor of the cluster remaining online. In the most basic Quorum configuration, called Node Majority, all votes are cast by the nodes. You can also assign a vote to a witness. A witness is a shared disk or file share, accessible by all nodes in the cluster, that contains a copy of the failover cluster database.
When you configure Node and File Share Majority or Node and Disk Majority as your Quorum configuration, the failover cluster can reach Quorum when only half of the nodes remain online (as opposed to a clear majority), as long as they can also communicate with the disk witness or file share witness. To configure a Quorum witness or to modify the default Quorum settings, in Failover Cluster Manager, right-click the cluster node in the console tree, click More Actions, and then click Configure Cluster Quorum Settings. With the recommended default settings (dynamic quorum management), the number of nodes required to reach Quorum adjusts automatically if nodes are removed from the cluster.
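Quorum settings can also be changed with the Set-ClusterQuorum cmdlet; a minimal sketch with hypothetical witness names:

# Use a shared disk as the witness
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
# Or use a file share as the witness instead
Set-ClusterQuorum -NodeAndFileShareMajority "\\FS1\ClusterWitness"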
Cluster-Aware Updating (CAU), introduced in Windows Server 2012, addresses a long-standing difficulty with applying software updates to failover cluster nodes. This difficulty stems from the fact that updating software normally requires a system restart. To maintain the availability of services hosted on failover clusters in previous versions of Windows, you needed to move all roles off one node, update the software on that node, restart the node, and then repeat the process on every other node, one at a time. Windows Server 2008 R2 failover clusters could include up to 16 nodes, so this process sometimes had to be repeated as many as 16 times.
With failover clusters now able to include as many as 64 nodes, the manual method of updating software on failover clusters is simply no longer a practical option. Instead, CAU automates the process of updating software for you. To initiate the process of updating a failover cluster, right-click the cluster in the list of servers in Server Manager and then click Update Cluster from the shortcut menu. By default, CAU applies updates obtained through Windows Update; beyond this default functionality, CAU can be extended through third-party plug-ins so that other software updates can also be performed.
The step just described shows how to trigger an update of a cluster manually, which might be too straightforward a task to appear on the exam. More likely, you could see a question about configuring self-updating. You can access the self-updating configuration settings in Failover Cluster Manager by right-clicking the cluster name in the console tree, pointing to More Actions, and then selecting Cluster-Aware Updating. You can enable self-updating on the cluster on the second (Add Clustered Role) page of the resulting wizard by selecting the option to add the CAU clustered role with self-updating mode enabled. The fourth (Advanced Options) page lets you change profile options, which set time boundaries for the update process and other advanced parameters.
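CAU also ships with its own Windows PowerShell module. The following is a hedged sketch with a hypothetical cluster name; the scheduling parameters for the self-updating role vary and should be checked against the Add-CauClusterRole documentation:

# Trigger an updating run against the cluster immediately
Invoke-CauRun -ClusterName CLUSTER1 -Force

# Add the CAU clustered role with self-updating mode enabled on a recurring schedule
Add-CauClusterRole -ClusterName CLUSTER1 -DaysOfWeek Sunday -WeeksOfMonth 1,3 -EnableFirewallRules -Force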
The only way you can upgrade a failover cluster to Windows Server 2012 or Windows Server 2012 R2 is to create a new cluster running the new operating system and migrate the roles from the old cluster to it. By using the migration wizard available in Failover Cluster Manager, you create the new cluster, shut down the roles on the old cluster, and then use the wizard to pull the roles to the new cluster.
You are the systems administrator at Tailspin Toys. You are in the process of preparing a submission to the procurement committee for a seven-node Windows Server 2012 R2 cluster that will have the Hyper-V role installed.
1. Which Quorum model is appropriate for the cluster?
2. Which technology can you implement to ensure that software updates are applied to cluster host nodes sequentially?
3. What technology could you use to provide shared storage to guest clusters?

Objective summary
In the failover cluster, servers called nodes are connected to each other and to shared storage such as a SAN or a shared SAS disk array. When a service or application configured for high availability fails on one server, the service or application immediately starts up on another. Cluster storage pools can be configured only from a shared SAS disk array.