Instructions for creating a cluster of several computers (a desktop cluster)

04.04.2022

First of all, decide which components and resources you will need. You will need one master (head) node, at least a dozen identical compute nodes, an Ethernet switch, a power distribution unit, and a rack. Estimate the wiring, cooling, and floor space required. Also decide which IP addresses to use for the nodes, which software you will install, and which technologies you will need for parallel computing (more on this below).
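As a planning aid, it helps to write the address plan down before any hardware arrives. The sketch below assumes a private 10.0.0.0/24 network with one master node and four compute nodes; the names and addresses are placeholders, not a requirement of any particular software.

    # Hypothetical address plan for a small cluster; adjust names and addresses to taste.
    # Appending it to /etc/hosts on every node keeps name resolution independent of DNS.
    printf '%s\n' \
        '10.0.0.1   head     # master node: scheduler, NFS, PXE, DHCP, NTP' \
        '10.0.0.11  node01   # compute nodes' \
        '10.0.0.12  node02' \
        '10.0.0.13  node03' \
        '10.0.0.14  node04' >> /etc/hosts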

  • Although the hardware is expensive, all of the software in this article is free, and most of it is open source.
  • If you want to know how fast your supercomputer could theoretically be, use this tool:

Assemble the nodes. You will either build the machines yourself or purchase pre-built servers.

  • Choose server chassis that make the most efficient use of space and power, and that cool efficiently.
  • Or you can “recycle” a dozen or so used, somewhat outdated servers — even though they will weigh more than the equivalent new components, you will save a decent amount of money. All processors, network adapters, and motherboards must be identical for the machines to work well together. Of course, don't forget RAM and hard drives for each node, and at least one optical drive for the master node.
  • Install the servers in the rack. Start at the bottom so the rack is not top-heavy. You will need a friend's help: assembled servers can be very heavy, and it is quite difficult to guide them onto the rails that support them in the rack.

    Install an Ethernet switch next to the rack. It's worth configuring the switch right away: set the jumbo frame size to 9000 bytes, set the static IP address you chose in step 1, and turn off unnecessary protocols such as SMTP.

    Install a power distribution unit (PDU). Depending on the maximum power draw of your nodes, you may need a 220-volt feed for a high-performance cluster.

  • When everything is installed, proceed to configuration. Linux is the de facto standard for high-performance computing (HPC) clusters: not only is it well suited to scientific computing, but you also don't pay anything to install it on hundreds or even thousands of nodes. Imagine how much it would cost to license Windows for all of those nodes!

    • Start by installing the latest motherboard BIOS and vendor software, which should be the same for all servers.
    • Install your preferred Linux distribution on all nodes, adding a graphical environment on the master node. Popular choices: CentOS, openSUSE, Scientific Linux, Red Hat Enterprise Linux, and SLES.
    • The author highly recommends using Rocks Cluster Distribution. In addition to installing all the software and tools a cluster needs, Rocks provides an excellent method for quickly rolling out copies of the system to many identical servers using PXE boot and Red Hat's Kickstart procedure.
  • Install the message passing interface, resource manager, and other required libraries. If you did not install Rocks in the previous step, you will need to manually install the necessary software to set up the parallel computing logic.

    • To get started, you'll need a portable batch system (PBS), such as the Torque Resource Manager, which lets you split and distribute tasks across multiple machines.
    • Add Maui Cluster Scheduler to Torque to complete the installation.
    • Next, set up a message passing interface (MPI), which the individual processes on each node need in order to share data. Open MPI is the easiest option.
    • Don't forget multi-threaded math libraries and the compilers that will build your programs for distributed execution (a sample job script is sketched after this list). Did I already say that you should just install Rocks?
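    Once Torque, Maui, and an MPI implementation are in place, a minimal job script shows how the pieces fit together. This is only a sketch: the program name ./my_mpi_app, the node counts, and the time limit are placeholders, and the exact options to use are in the Torque and MPI documentation.

      #!/bin/bash
      #PBS -N hello_cluster          # job name
      #PBS -l nodes=4:ppn=2          # request 4 nodes, 2 processors per node
      #PBS -l walltime=00:10:00      # wall-clock time limit
      #PBS -j oe                     # merge stdout and stderr into one file

      cd "$PBS_O_WORKDIR"            # Torque starts the job in the home directory by default
      # Launch 8 MPI ranks (4 nodes x 2 cores); ./my_mpi_app is a placeholder binary.
      mpirun -np 8 ./my_mpi_app

    Submitted with qsub, this script is handed to the resource manager, which decides which compute nodes actually run it.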
  • Connect the computers into a network. The master node sends computation tasks to the compute nodes, which must return results and also exchange messages with each other; the faster this happens, the better.

    • Use a private Ethernet network to connect all nodes in a cluster.
    • The master node can also act as an NFS, PXE, DHCP, TFTP and NTP server when connected to Ethernet.
    • Keep this network separate from the public network so that cluster traffic does not compete with other packets on the LAN (a quick connectivity check is sketched below).
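    Before moving on to benchmarks, it is worth confirming that every compute node answers on the private network. A minimal check, assuming the nodes carry the hypothetical names node01 through node04 (substitute your own host names or IP addresses):

      #!/bin/bash
      # Ping each compute node once over the private network and report failures.
      for n in node01 node02 node03 node04; do
          if ping -c 1 -W 2 "$n" > /dev/null 2>&1; then
              echo "$n: ok"
          else
              echo "$n: NO ANSWER - check cabling, switch port and IP configuration"
          fi
      done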
  • Test the cluster. The last thing to do before giving users access to the computing power is a performance test. The HPL (High-Performance Linpack) benchmark is a popular way to measure a cluster's computing speed. Compile it from source with the highest degree of optimization your compiler allows for the architecture you have chosen (a sample build-and-run session is sketched after this list).

    • You must, of course, compile with all the optimization settings available for your platform. For example, on AMD CPUs, compile with Open64 and the highest optimization level (e.g. -O3).
    • Compare your results with TOP500.org to see how your cluster stacks up against the 500 fastest supercomputers in the world!
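    For orientation, a typical HPL session looks roughly like the following once MPI and a BLAS library are installed. The archive name, the architecture name Linux_ATHLON_CBLAS (one of the example makefiles shipped in HPL's setup directory), and the rank count are assumptions; the problem size, block size, and process grid live in HPL.dat and must be tuned for your hardware.

      # Build HPL against your MPI and BLAS, then run it across the cluster.
      tar xzf hpl.tgz && cd hpl                 # archive name depends on the HPL version
      cp setup/Make.Linux_ATHLON_CBLAS .        # edit the MPI/BLAS paths inside this makefile
      make arch=Linux_ATHLON_CBLAS
      cd bin/Linux_ATHLON_CBLAS
      # Edit HPL.dat (problem size N, block size NB, P x Q process grid) before running.
      mpirun -np 8 ./xhpl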
  • Introduction

    A server cluster is a group of independent servers managed by the cluster service that work together as a single system. Server clusters are created by bringing multiple Windows® 2000 Advanced Server and Windows 2000 Datacenter Server-based servers together to provide high availability, scalability, and manageability for resources and applications.

    The task of a server cluster is to provide continuous user access to applications and resources in cases of hardware or software failures or planned equipment shutdowns. If one of the cluster servers becomes unavailable due to a failure or shutdown for maintenance, information resources and applications are redistributed among the remaining available cluster nodes.

    For cluster systems, the term "high availability" is preferred over "fault tolerance," because fault-tolerance technologies demand a higher level of hardware resilience and recovery mechanisms. As a rule, fault-tolerant servers use a high degree of hardware redundancy plus specialized software that allows almost immediate recovery from any single software or hardware failure. These solutions are significantly more expensive than cluster technologies, because organizations have to pay for additional hardware that sits idle most of the time and is used only in case of failure. Fault-tolerant servers are used for high-value, transaction-intensive applications such as payment processing centers, ATMs, or stock exchanges.

    Although the Cluster service is not guaranteed to run non-stop, it provides a level of availability sufficient for most mission-critical applications. The Cluster service can monitor applications and resources, automatically detecting failures and recovering after they occur. This provides more flexible workload management within the cluster and improves overall system availability.

    Key benefits of using the Cluster service:

    • High availability. If a node fails, the cluster service transfers control of resources, such as hard disks and network addresses, to the active cluster node. When a software or hardware failure occurs, the cluster software restarts the failed application on the live node, or shifts the entire load of the failed node to the remaining live nodes. In this case, users may notice only a short delay in service.
    • Failback. The Cluster service automatically redistributes the workload across the cluster when a failed node becomes available again.
    • Manageability. Cluster Administrator is a snap-in that you can use to manage the cluster as a single system and to manage applications. It presents applications as if they were running on a single server. You can move applications to different servers within the cluster by dragging and dropping cluster objects, and you can move data the same way. This can be used to manually balance server workloads, or to unload a server before stopping it for scheduled maintenance. In addition, Cluster Administrator lets you remotely monitor the state of the cluster and of all its nodes and resources.
    • Scalability. To ensure that cluster performance can always keep up with growing demands, the Cluster service is designed to scale. If the overall performance of the cluster becomes insufficient to handle the load generated by the clustered applications, additional nodes can be added to the cluster.

    This document provides instructions for installing the Cluster service on servers running Windows 2000 Advanced Server and Windows 2000 Datacenter Server. It does not cover installing and configuring clustered applications; it only walks you through the installation of a simple two-node cluster.

    System requirements for creating a server cluster

    The following checklists will help you prepare for the installation. Step-by-step installation instructions follow the checklists.

    Software requirements

    • Microsoft Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating system installed on all servers in the cluster.
    • An installed name resolution service, such as Domain Name System (DNS), Windows Internet Name Service (WINS), HOSTS files, etc.
    • A terminal server for remote cluster administration. This is not mandatory, but is recommended for convenient cluster management.

    Hardware Requirements

    • The hardware requirements for a cluster node are the same as for installing the Windows 2000 Advanced Server or Windows 2000 Datacenter Server operating systems. These requirements can be found through the search page of the Microsoft catalog.
    • The cluster hardware must be certified and listed on the Microsoft Cluster Service Hardware Compatibility List (HCL). The latest version of this list can be found by searching the Windows 2000 Hardware Compatibility List in the Microsoft catalog and selecting the "Cluster" search category.

    Two HCL-qualified computers, each with:

    • A hard drive with a bootable system partition and Windows 2000 Advanced Server or Windows 2000 Datacenter Server installed. This drive must not be connected to the shared storage bus discussed below.
    • A separate PCI controller, Fibre Channel or SCSI, for connecting the external shared storage device. This controller must be present in addition to the boot disk controller.
    • Two PCI network adapters installed on each computer in the cluster.
    • An external disk storage device listed in the HCL, attached to all nodes of the cluster. It will act as the cluster disk. A configuration using hardware RAID arrays is recommended.
    • Cables for connecting a shared storage device to all computers. Refer to the manufacturer's documentation for instructions on configuring storage devices. If you are connecting to a SCSI bus, you can refer to Appendix A for more information.
    • All hardware on the cluster computers must be completely identical. This will simplify the configuration process and save you from potential compatibility issues.

    Network Configuration Requirements

    • Unique NetBIOS name for the cluster.
    • Five unique static IP addresses: two for private network adapters, two for public network adapters, and one for the cluster.
    • Domain account for the cluster service (all cluster nodes must be members of the same domain)
    • Each node must have two network adapters - one for connecting to the public network, one for intra-cluster communication of nodes. A configuration using a single network adapter to connect to a public and private network at the same time is not supported. A separate network adapter for the private network is required to comply with HCL requirements.

    Requirements for shared storage drives

    • All shared storage drives, including the quorum drive, must be physically connected to the shared bus.
    • All disks connected to the shared bus must be available to each node. This can be verified during the installation and configuration phase of the host adapter. Refer to the adapter manufacturer's documentation for detailed instructions.
    • Each SCSI device must be assigned a unique target SCSI ID, and the SCSI bus must be properly terminated, according to the manufacturer's instructions.
    • All shared storage disks must be configured as basic disks (not dynamic)
    • All partitions on shared storage drives must be formatted with the NTFS file system.

    It is highly recommended that all shared storage drives be configured into hardware RAID arrays. Although not required, creating fault-tolerant RAID configurations is key to protecting against disk failures.

    Installing a cluster

    General overview of the installation

    During the installation process, some nodes will be shut down and some will be rebooted. This is necessary in order to ensure the integrity of data located on disks connected to the common bus of an external storage device. Data corruption can occur when multiple nodes simultaneously attempt to write to the same drive that is not protected by the cluster software.

    Table 1 will help you determine which nodes and storage devices must be enabled for each step of the installation.

    This guide describes how to create a two-node cluster. However, if you are setting up a cluster with more than two nodes, you can use the column value "Node 2" to determine the state of the remaining nodes.

    Table 1. Sequence of enabling devices during cluster installation

    Step                                        | Node 1 | Node 2 | Storage device | Comment
    Setting network parameters                  | On     | On     | Off            | Make sure all storage devices connected to the shared bus are turned off. Turn on all nodes.
    Setting up shared drives                    | On     | Off    | On             | Turn off all nodes. Power on the shared storage device, then power on the first node.
    Checking the configuration of shared drives | Off    | On     | On             | Turn off the first node, turn on the second node. Repeat for nodes 3 and 4 if necessary.
    Configuring the first node                  | On     | Off    | On             | Turn off all nodes; turn on the first node.
    Configuring the second node                 | On     | On     | On             | After successfully configuring the first node, power on the second node. Repeat for nodes 3 and 4 if necessary.
    Completing the installation                 | On     | On     | On             | At this point, all nodes should be turned on.

    Before installing the cluster software, you must complete the following steps:

    • Install Windows 2000 Advanced Server or Windows 2000 Datacenter Server on each computer in the cluster.
    • Configure network settings.
    • Set up shared storage drives.

    Complete these steps on each node of the cluster before installing the Cluster service on the first node.

    To configure the Cluster service on a server running Windows 2000, your account must have administrative rights on each node. All cluster nodes must be either member servers or controllers of the same domain at the same time. Mixed use of member servers and domain controllers in a cluster is not allowed.

    Installing the Windows 2000 operating system

    To install Windows 2000 on each cluster node, refer to the documentation that came with your operating system.

    This document uses the naming structure from the manual "Step-by-Step Guide to a Common Infrastructure for Windows 2000 Server Deployment". However, you can use any names.

    You must be logged in with an administrator account before starting the installation of the cluster service.

    Configuring network settings

    Note: At this point in the installation, turn off all shared storage devices, and then turn on all nodes. You must prevent multiple nodes from accessing shared storage at the same time until the Cluster service is installed on at least one of the nodes and that node is powered on.

    Each node must have at least two network adapters installed - one to connect to the public network and one to connect to the private network of the cluster nodes.

    The private network network adapter provides communication between nodes, communication of the current state of the cluster, and management of the cluster. Each node's public network adapter connects the cluster to the public network of client computers.

    Make sure all network adapters are physically connected correctly: private network adapters are only connected to other private network adapters, and public network adapters are connected to public network switches. The connection diagram is shown in Figure 1. Perform this check on each node of the cluster before proceeding to configure the shared storage drives.

    Figure 1: An example of a two-node cluster

    Configuring a private network adapter

    Perform these steps on the first node of your cluster.

    1. Right-click the My Network Places icon and select Properties.
    2. Right-click the Local Area Connection 2 icon.

    Note: Which network adapter will serve the private network and which one the public network depends on the physical connection of the network cables. In this document, we will assume that the first adapter (Local Area Connection) is connected to the public network and the second adapter (Local Area Connection 2) is connected to the cluster's private network. In your case, this may not be the case.

    1. Select Status. The Local Area Connection 2 Status window shows the connection state and speed. If the connection is disconnected, check the cables and connectors and fix the problem before continuing. Click Close.
    2. Right-click the Local Area Connection 2 icon again, select Properties, and click Configure.
    3. Select the Advanced tab. The window shown in Figure 2 will appear.
    4. For private network adapters, the speed must be set manually instead of using the default value. Specify your network's speed in the drop-down list. Do not use the "Auto Sense" or "Auto Select" values, because some network adapters drop packets while determining the connection speed. To set the network adapter speed, specify the actual value for the Connection Type or Speed parameter.

    Figure 2: Network adapter advanced settings

    All network adapters in the cluster that are connected to the same network must be configured identically and use the same values for duplex mode, flow control, connection type, and so on. Even if different nodes use different network hardware, the values of these parameters must be the same.

    1. Select Internet Protocol (TCP/IP) in the list of components used by the connection.
    2. Click Properties.
    3. Select Use the following IP address and enter the address 10.1.1.1. (For the second node, use the address 10.1.1.2.)
    4. Set the subnet mask: 255.0.0.0.
    5. Click Advanced and select the WINS tab. Select Disable NetBIOS over TCP/IP. Click OK to return to the previous menu. Perform this step only for the private network adapter.

    Your dialog box should look like Figure 3.

    Figure 3: Private network connection IP address

    Configuring a public network adapter

    Note: If a DHCP server is running on a public network, an IP address for the public network adapter may be assigned automatically. However, this method is not recommended for cluster node adapters. We strongly recommend that you assign permanent IP addresses to all public and private host NICs. Otherwise, if the DHCP server fails, access to the cluster nodes may not be possible. If you are forced to use DHCP for public network adapters, use long address leases to ensure that the dynamically assigned address remains valid even if the DHCP server becomes temporarily unavailable. Always assign permanent IP addresses to private network adapters. Keep in mind that the Cluster service can only recognize one network interface per subnet. If you need help with assigning network addresses in Windows 2000, see the operating system's built-in help.

    Renaming network connections

    For clarity, we recommend changing the names of your network connections. For example, you can rename Local Area Connection 2 to Connecting to a private cluster network. This will help you identify the networks more easily and assign their roles correctly.

    1. Right-click the Local Area Connection 2 icon.
    2. In the context menu, select Rename.
    3. Type Connecting to a private cluster network in the text field and press ENTER.
    4. Repeat steps 1-3 and rename the Local Area Connection connection to Connection to a public network.

    Figure 4: Renamed network connections

    1. The renamed network connections should look like Figure 4. Close the Network and Dial-up Connections window. The new network connection names are automatically replicated to the other cluster nodes when they are powered on.

    Checking Network Connections and Name Resolutions

    To verify that the installed network hardware is working, complete the following steps for all network adapters on each node. To do this, you need to know the IP addresses of all network adapters in the cluster; you can get this information by running the ipconfig command on each node:

    1. Click Start, select Run, and type cmd in the text box. Click OK.
    2. Type ipconfig /all and press ENTER. You will see the IP configuration of each network adapter on the local machine.
    3. If you do not have a command prompt window open yet, repeat step 1.
    4. Type ping ipaddress, where ipaddress is the IP address of the corresponding network adapter on the other node. Assume, for example, that the network adapters have the following IP addresses:

    Node number | Network connection name                 | Network adapter IP address
    1           | Public network connection               | 172.16.12.12
    1           | Connecting to a private cluster network | 10.1.1.1
    2           | Public network connection               | 172.16.12.14
    2           | Connecting to a private cluster network | 10.1.1.2

    In this example, you would run the commands ping 172.16.12.14 and ping 10.1.1.2 from node 1, and the commands ping 172.16.12.12 and ping 10.1.1.1 from node 2.

    To check name resolution, run the command ping, using the computer name as the argument instead of its IP address. For example, to check name resolution for the first cluster node named hq-res-dc01, run the command ping hq-res-dc01 from any client computer.

    Domain membership check

    All nodes in the cluster must be members of the same domain and must be able to network with the domain controller and DNS server. The nodes can be configured as domain member servers or as controllers of the same domain. If you decide to make one of the nodes a domain controller, then all other nodes in the cluster must also be configured as domain controllers of the same domain. This guide assumes that all nodes are domain controllers.

    Note: For links to additional documentation on configuring domains, DNS, and DHCP services in Windows 2000, see Related Resources at the end of this document.

    1. Right-click My Computer and select Properties.
    2. Select the Network Identification tab. The System Properties dialog box shows the full computer name and domain name. In our example, the domain is called reskit.com.
    3. If you have configured the node as a member server, you can join it to the domain at this point. Click Properties and follow the instructions for joining the computer to the domain.
    4. Close the System Properties and My Computer windows.

    Create a Cluster Service Account

    You must create a separate domain account under which the Cluster service will run. The installer will ask for the credentials of this account, so it must be created before the service is installed. The account should not belong to any individual domain user and should be used exclusively for running the Cluster service.

    1. Click Start, select Programs / Administrative Tools, and start the Active Directory Users and Computers snap-in.
    2. Expand the reskit.com domain if it is not already expanded.
    3. Select Users in the list.
    4. Right-click Users, select New from the context menu, then select User.
    5. Enter a name for the Cluster service account, as shown in Figure 5, and click Next.

    Figure 5: Adding a Cluster User

    1. Select the User cannot change password and Password never expires check boxes. Click Next, then Finish, to create the user.

    Note: If your administrative security policy does not allow the use of passwords that never expire, you will need to update the password and configure the Cluster service on each node before it expires.

    1. In the right pane of Active Directory Users and Computers, right-click the Cluster user.
    2. In the context menu, select Add members to a group.
    3. Select the Administrators group and click OK. The new account now has administrator privileges on the local computer.
    4. Close the Active Directory Users and Computers snap-in.

    Configuring Shared Storage Drives

    Warning: Ensure that at least one of the cluster nodes is running Windows 2000 Advanced Server or Windows 2000 Datacenter Server and that the Cluster service is configured and running. Only then can you boot the Windows 2000 operating system on the remaining nodes. If these conditions are not met, the cluster disks may be damaged.

    To start configuring shared storage drives, turn off all nodes. After that, turn on the shared storage device, then turn on node 1.

    Quorum Disk

    The quorum disk stores the checkpoints and recovery log files of the cluster database, and is used to manage the cluster. We make the following recommendations for creating the quorum disk:

    • Create a small partition (at least 50MB in size) to use as the quorum disk. We generally recommend creating a 500 MB quorum disk.
    • Allocate a separate disk for the quorum resource. Since the entire cluster will fail if the quorum disk fails, we strongly recommend using a hardware RAID array.

    During the installation of the Cluster service, you will need to assign a drive letter to the quorum. In our example, we will use the letter Q.

    Configuring Shared Storage Drives

    1. Right-click My Computer and select Manage. In the window that opens, expand the Storage category.
    2. Select Disk Management.
    3. Make sure all shared storage drives are formatted with NTFS and have Basic status. If you connect a new drive, the Write Signature and Upgrade Disk Wizard starts automatically. When the wizard starts, let it run to completion; the drive will then be marked as Dynamic. To convert the disk back to basic, right-click Disk # (where # is the number of the disk you are working with) and select Revert to Basic Disk.

    Right-click the unallocated area next to the corresponding disk.

    1. Select Create Partition.
    2. The Create Partition Wizard starts. Click Next twice.
    3. Enter the desired partition size in megabytes and click Next.
    4. Click Next, accepting the default drive letter.
    5. Click Next to format and create the partition.

    Assign drive letters

    After the data bus, disks, and shared storage partitions are configured, you must assign drive letters to all partitions on all disks in the cluster.

    Note: Mount points are a file system feature that allows you to mount a file system using existing directories without assigning a drive letter. Mount points are not supported by clusters. Any external drive used as a cluster resource must be partitioned into NTFS partitions, and these partitions must be assigned drive letters.

    1. Right-click the desired partition and select Change Drive Letter and Path.
    2. Choose a new drive letter.
    3. Repeat steps 1 and 2 for all shared storage drives.

    Figure 6: Drive partitions with assigned letters

    1. When you are finished, the Computer Management snap-in window should look like Figure 6. Close the Computer Management snap-in.

    Checking disk operation and shared access

    1. Click Start, select Programs / Accessories, and run Notepad.
    2. Type a few words and save the file under the name test.txt, using the Save As command on the File menu. Close Notepad.
    3. Double-click the My Documents icon.
    4. Right-click the test.txt file and select Copy from the context menu.
    5. Close the window.
    6. Open My Computer.
    7. Double-click a disk partition of the shared storage device.
    8. Right-click and select Paste.
    9. A copy of the test.txt file should appear on the shared storage drive.
    10. Double-click test.txt to open it from the shared storage drive. Then close the file.
    11. Select the file and press the Del key to delete it from the cluster disk.

    Repeat the procedure for all disks in the cluster to ensure they are accessible from the first node.

    Now turn off the first node, turn on the second node, and repeat the steps in the section Checking disk operation and shared access. Perform the same steps on all additional nodes. Once you have verified that all nodes can read and write information to the shared storage disks, turn off all nodes except the first one and proceed to the next section.

    Working on one machine is no longer fashionable,
    or: building a cluster at home.

    1. Introduction

    Many of you have several Linux machines on your local network whose processors sit idle practically all the time. Many have also heard of systems in which machines are combined into a single supercomputer. But few have actually tried such experiments at work or at home. Let's try to put together a small cluster. By building a cluster you can genuinely speed up some of your tasks, for example compilation, or running several resource-hungry processes at once. In this article I will try to show you how, without much effort, you can join the machines of your local network into a single cluster based on MOSIX.

    2. How, what, and where

    MOSIX is a patch for the Linux kernel plus a set of utilities that allows processes on your machine to move (migrate) to other nodes of the local network. You can get it at http://www.mosix.cs.huji.ac.il; it is distributed as source code under the GPL license. Patches exist for all kernels of the stable Linux branch.

    3. Installing the software

    Before you start, I recommend downloading from the MOSIX site not just MOSIX itself but also the companion utilities: mproc, mexec, and others.
    The MOSIX archive contains an installation script, mosix_install. Do not forget to unpack the kernel sources into /usr/src/linux-*.*.* (for example, as I did, into /usr/src/linux-2.2.13), then run mosix_install and answer all of its questions, giving it your boot manager (LILO), the path to the kernel sources, and the runlevels.
    When configuring the kernel, enable the options CONFIG_MOSIX, CONFIG_BINFMT_ELF, and CONFIG_PROC_FS. All of these options are described in detail in the MOSIX installation guide.
    Installed? Well then, reboot your Linux with the new kernel, whose name will look something like mosix-2.2.13.
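    A condensed sketch of that installation sequence, assuming kernel 2.2.13 and the paths mentioned above; the archive file names are placeholders, and mosix_install itself asks about the boot manager, kernel path, and runlevels interactively:

      # Unpack the kernel sources where mosix_install expects to find them.
      cd /usr/src
      tar xzf linux-2.2.13.tar.gz            # kernel source archive (placeholder name)

      # Unpack the MOSIX distribution plus the companion utilities (mproc, mexec, ...).
      tar xzf MOSIX-*.tar.gz && cd MOSIX-*   # archive name depends on the version
      ./mosix_install                        # interactive: LILO, kernel source path, runlevels

      # During kernel configuration enable CONFIG_MOSIX, CONFIG_BINFMT_ELF and
      # CONFIG_PROC_FS, rebuild, and reboot into the new mosix-2.2.13 kernel.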

    4. Configuration

    A freshly installed MOSIX has no idea which machines are on your network or which of them it should talk to, but configuring this is very simple. If you have just installed MOSIX and your distribution is SuSE- or RedHat-compatible, go to the /etc/rc.d/init.d directory and run the command mosix start. On the first run this script asks you to configure MOSIX and launches a text editor to create the file /etc/mosix.map, which contains the list of the nodes of your cluster. There we write the following: if you have only two or three machines and their IP addresses follow
    one another consecutively, write it like this:



    1 10.152.1.1 5

    where the first parameter is the number of the starting node, the second is the IP address of the first node, and the last is the number of nodes counted from the current one. That is, our cluster now has five nodes, whose IP addresses end in 1, 2, 3, 4, and 5.
    Or another example:

    node number   IP address      nodes counted from this one
    ______________________________________
    1             10.150.1.1      1
    2             10.150.1.55     2
    4             10.150.1.223    1

    With this configuration we get the following layout:
    IP of node 1: 10.150.1.1
    IP of node 2: 10.150.1.55
    IP of node 3: 10.150.1.56
    IP of node 4: 10.150.1.223
    Now MOSIX has to be installed on every machine of the future cluster, and the identical configuration file /etc/mosix.map created on each of them.
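    A minimal sketch of how that map could be created and pushed to the other nodes, using the addresses from the example above; scp is only one possible way to distribute the file, and the root account is an assumption:

      # Create /etc/mosix.map with the three ranges from the example above.
      printf '%s\n' \
          '1 10.150.1.1   1' \
          '2 10.150.1.55  2' \
          '4 10.150.1.223 1' > /etc/mosix.map

      # Copy the identical file to the other nodes, then (re)start MOSIX locally;
      # repeat the start on every node.
      for host in 10.150.1.55 10.150.1.56 10.150.1.223; do
          scp /etc/mosix.map root@"$host":/etc/mosix.map
      done
      /etc/rc.d/init.d/mosix start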

    Now, after restarting mosix, your machine will already be working in the cluster, which you can see by launching the monitor with the mon command. If you see only your own machine in the monitor, or do not see anyone at all, then, as they say, you have some digging to do. Most likely the error is in /etc/mosix.map.
    So you can see the nodes, but you have not won yet. What next? The next step is very simple :-) - build the utilities for working with the modified /proc from the mproc package. In particular, that package contains a nice modification of top called mtop, which adds the ability to display the node, sort by node, move a process from the current node to another one, and set the minimum node CPU load above which processes start migrating to other MOSIX nodes.
    Launch mtop, pick a non-sleeping process you like (I recommend starting bzip) and boldly press the "g" key on your keyboard; when prompted, enter the PID of the process chosen as the victim, and then the number of the node you want to send it to. After that, look carefully at the results displayed by the mon command: that machine should begin to take on the load of the chosen process.
    And mtop itself shows, in the #N field, the number of the node where each process is running.
    But that is not all - you surely do not want to send processes to other nodes by hand, do you? I did not. MOSIX has decent built-in load balancing inside the cluster, which distributes the load more or less evenly across all nodes. This, however, is where we have to do some work. To begin with, I will describe how to do the fine tuning (tune) for two nodes of the cluster, during which MOSIX obtains information about the speed of the processors and of the network:
    Remember once and for all: tune can only be run in single-user mode. Otherwise you will either get a not-quite-correct result, or your machine may simply hang.
    So, we run tune. After switching the operating system to single-user mode, for example with the command init 1 or init S, we run the prep_tune script, which brings up the network interfaces and starts MOSIX. After that, on one of the machines we run tune, give it the number of the other node to tune against, and wait for the result; the numbers it produces must then be fed to the command tune -a <node> on the other node, where the operation is repeated. After such tuning, a file /etc/overheads should appear on your system, containing information for MOSIX in the form of certain numeric data. If for some reason tune could not create it, simply copy the file mosix.cost from the current directory to /etc/overheads. That should help ;-).
    When tuning a cluster of more than two machines, use the tune_kernel utility, which also ships with MOSIX. It lets you configure the cluster in a simpler and more familiar way, by answering a few questions and running the tuning against two machines of the cluster.
    Incidentally, from my own experience I can say that while tuning the cluster you should not load the network; on the contrary, pause all active operations on the local network.
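    A compressed view of that two-node tuning procedure; the node number is an example value, and the exact prompts and output depend on your MOSIX version:

      # On both machines: drop to single-user mode first (tune must not be run multi-user).
      init 1                        # or: init S
      ./prep_tune                   # brings up the network interfaces and starts MOSIX

      OTHER_NODE=5                  # number of the peer node (example value)

      # On machine A: measure against the peer; it prints numbers the peer will ask for.
      tune "$OTHER_NODE"

      # On machine B: enter those results, repeating the measurement in the other direction.
      tune -a "$OTHER_NODE"

      # Afterwards /etc/overheads should exist; if tune failed to create it:
      cp mosix.cost /etc/overheads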

    5. Managing the cluster

    There is a small set of commands for managing a cluster node, among them:

    mosctl - control over the node. It lets you change node parameters such as block, stay, lstay, delay, and so on.
    Let's look at a few of this utility's parameters:
    stay - stops processes from migrating from the current machine to other nodes. Cancelled with the parameter nostay or -stay.
    lstay - forbids only local processes from migrating; processes from other machines may keep doing so. Cancelled with the parameter nolstay or -lstay.
    block - forbids remote/guest processes from running on this node. Cancelled with the parameter noblock or -block.
    bring - brings back all processes of the current node that are running on other machines of the cluster. This parameter may not take effect until the migrated process receives an interrupt from the system.
    setdelay - sets the time after which a process starts to migrate.
    After all, you will agree that if a process runs for less than a second, there is no point in moving it to other machines on the network. It is exactly this time that is set with the mosctl utility and the setdecay parameter. Example:
    mosctl setdecay 1 500 200
    sets the migration delay to other nodes to 500 milliseconds if the process was started as slow, and to 200 milliseconds for fast processes. Note that the slow parameter must always be greater than or equal to the fast parameter.
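    The same mosctl calls collected in one place, written as they would be typed; the syntax follows the description above, so check man mosctl for the exact form in your MOSIX version:

      mosctl stay                 # keep local processes from migrating away
      mosctl nostay               # allow local migration again
      mosctl lstay                # pin only local processes; guest processes may still migrate
      mosctl block                # refuse remote/guest processes on this node
      mosctl bring                # pull this node's migrated processes back home
      mosctl setdecay 1 500 200   # 500 ms threshold for "slow", 200 ms for "fast" processes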

    mosrun - launches an application in the cluster. For example, mosrun -e -j5 make will start make on node 5 of the cluster, and all of its child processes will also run on node 5. There is one caveat here, though, and a rather significant one: if the child processes finish faster than the delay set with the mosctl utility, the process will not migrate to other nodes of the cluster. mosrun has quite a few other interesting parameters; you can learn about them in detail in the utility's manual (man mosrun).

    mon - as we already know, this is the cluster monitor, which displays in pseudo-graphic form the load of each working node of your cluster and the amount of free and used memory on the nodes, and produces a lot of other, no less interesting information.

    mtop - a version of the top command modified for use on cluster nodes. It displays dynamic information about the processes started on the given node and about the nodes to which your processes have migrated.

    mps - likewise a modified version of the ps command, with one extra field added: the number of the node to which the process has migrated.

    Those, in my view, are all the main utilities. In fact you can of course get by even without them, for example by using /proc/mosix to control the cluster. There, besides finding basic information about the node's settings, the processes started from other nodes, and so on, you can also change some of the parameters.

    6. Experimenting

    Unfortunately, I did not manage to make a single process run on several nodes at the same time. The most I achieved in my experiments with the cluster was having resource-hungry processes executed on another node.
    Let's look at one of the examples:
    Suppose our cluster consists of two machines (two nodes): one numbered 1 (a Celeron 366), the other numbered 5 (a PIII 450). We will experiment on node 5; node 1 was idle at the time. ;-)
    So, on node 5 we launch the crark utility to brute-force the password of a rar archive. If any of you have tried working with such utilities, you know that the password-cracking process "eats" up to 99 percent of the CPU. Well then - after starting it we observe that the process stays here, on node 5. Reasonable: after all, this node's performance is almost twice that of node 1.
    Next we simply started a build of kde 2.0. Looking at the process table, we see that crark has successfully migrated to node 1, freeing the processor and memory (yes, yes - memory is freed in exactly the same way) for make. And as soon as make finished its work, crark returned home to its native node 5.
    An interesting effect occurs if crark is started on the slower node 1.
    There we observe practically the opposite result: the process immediately migrates to node 5, the faster node. It then comes back when the owner of the fifth computer starts doing something with the system.

    7. Usage

    Finally, let's work out why and how we can use a cluster in everyday life.
    First of all, remember once and for all: a cluster pays off only when your network contains a fair number of machines that frequently sit idle and you want to use their resources, for example for building KDE or for any other serious processes. Thanks to a cluster of 10 machines you can compile up to 10 heavy programs, in that same C++, at the same time. Or crack some password without interrupting that process for a second, regardless of the load on your own computer.
    And in general - it is simply interesting ;-).
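    As a small illustration of that kind of everyday use: once MOSIX is balancing the load, an ordinary parallel build is enough to spread work over the idle machines, with no cluster-specific commands at all (the -j value below is just an example):

      # The compiler processes spawned by make are ordinary Linux processes,
      # so MOSIX can migrate them to idle nodes on its own.
      make -j 10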

    8. Conclusion

    In conclusion I want to say that this article does not cover all of MOSIX's capabilities, simply because I have not yet gotten around to them. If I do, expect a sequel. :-)
