What does peer-to-peer mean? Peer-to-peer networks

31.01.2022

Working with individual cameras and entire video surveillance systems over the Internet has become widespread thanks to rich analytical functions and quick access to devices.

As a rule, most of the technologies used for this require assigning an expensive public ("white") IP address to the camera or DVR and a complex configuration procedure involving UPnP and DDNS services. An alternative is P2P technology.

P2P (peer-to-peer) is a communication protocol between equal participants, distinguished by more efficient use of channel bandwidth and high fault tolerance.

The term was first used by IBM in APPN (Advanced Peer to Peer Networking), a classic peer-to-peer architecture of equal workstations. It described a serverless dynamic routing process in which each PC acted as both client and server. Today the name is usually rendered more freely as "equal to equal".

The main area of application is remote video surveillance of various objects, for example:

  • open storage or construction site;
  • store or industrial premises;
  • homestead or cottage.

CCTV cameras with P2P image transmission technology are mainly used in domestic small and medium-sized private video surveillance systems, performing some functions of security and alarm systems.

The camera is identified on the Internet by a unique ID code assigned to the device by the manufacturer. It is found and accessed with the help of special software and cloud services.

ADVANTAGES OF P2P VIDEO SURVEILLANCE

Ease of configuring network equipment is the main advantage of P2P technology over other signal transmission methods. Without deep knowledge of network protocols or connection and setup procedures, any user with basic Internet skills can organize remote video surveillance independently.

No binding to a static IP address. Obtaining and maintaining a static IP address can be a problem for the average user: most providers offer Internet access with dynamically assigned IP addresses drawn from a pool.

Such an address may change each time the user connects to the network, which would require reconfiguring the cameras of the video surveillance system every time. Providers do offer a static public ("white") IP address, but as a paid service, and it is not cheap.

No dependence on distance. The video signal can be delivered anywhere in the world with Internet access; image quality depends only on channel bandwidth and connection stability.

Ability to watch the video on different devices. The surveillance system can be monitored from a stationary PC or laptop as well as from mobile devices: tablets and smartphones.

Affordable cost. The price of video surveillance cameras using P2P technology is not too different from the cost of conventional IP cameras with comparable technical and operational parameters.

P2P SURVEILLANCE CAMERAS

Below are the main manufacturers of P2P cameras and some of their models.

Falcon Eye is a manufacturer of video surveillance and security equipment that specializes in wireless GSM security alarm systems. It has had an official representative office in Russia since 2005. All of the manufacturer's products sold in the country are certified, adapted to harsh weather conditions, and conform to the international standard ISO 9001.

The range of P2P surveillance cameras includes:

  • Falcon Eye FE-MTR 1300;
  • Falcon Eye FE-MTR 300 P2P;
  • Falcon Eye FE-ITR 1300.

All of these cameras provide high-resolution 1280x720 images, operate at illumination as low as 0.1 lux and have LAN and Wi-Fi interfaces (the Falcon Eye FE-ITR 1300 is LAN only). In addition, they are equipped with motion detection and can start recording on alarm.

Recording can be done on video recorders, in a cloud service or on a memory card. The presence of a microphone and speaker turns the camera into an interactive device for two-way conversations.

Foscam, founded in 2002, specializes in IP cameras and devices for GSM video surveillance. Its products are certified to the international ISO 9001 standard and to domestic state standards. The devices are equipped with a motion detector, memory card slots and an RJ-45 interface (twisted-pair network connection).

Most popular models:

  • Foscam FI9821P;
  • Foscam FI9853EP;
  • Foscam FI9803EP.

Zodiac offers devices for both home and professional video surveillance systems. All of its P2P cameras are equipped with infrared illumination, which allows shooting video in the dark.

Models on the market:

  • Zodiac 909W;
  • Zodiac 911;
  • Zodiac 808, an outdoor model in a housing with IP65 ingress protection.

P2P VIDEO SURVEILLANCE SETUP

Setting up a P2P video camera takes no more than five minutes and requires no deep knowledge of communication protocols or complex software settings. Regardless of the camera used or the chosen cloud service, the setup algorithm is as follows:

1. Download and install software compatible with the operating system of the viewing device from the site of the chosen cloud service.

2. Install the camera and supply power to it.

3. Connect the camera to the Internet via a wired local network or wirelessly (Wi-Fi, GSM, etc.).

4. Launch the installed software on the viewing device and type the camera's ID code into the search field. The code can be found on the camera body or in the technical documentation; most models also carry a QR code on the case that can be scanned with a smartphone or tablet.

5. Enter the default password to access the camera, then change it. Each manufacturer or model has its own default, indicated on the box or in the device passport.

A P2P video surveillance system can also be built without cameras that integrate P2P technology: it is enough to use a DVR with this function in a conventional system. In that case you specify the DVR's ID during setup and access the cameras through its interface.

The DVR setup algorithm is no different from the camera setup. An example of such a device is the SPYMAX RL-2508H Light hybrid video recorder.

CLOUD SERVICES SUPPORTING P2P TECHNOLOGY

A cloud P2P service is a set of servers that provide access to devices supporting this function. There are many such resources, and they fall into two types: services developed by equipment manufacturers, which as a rule support only that developer's P2P cameras, and universal services developed by third parties, compatible with most P2P devices.

For example, the Proto-X and RVi services only accept cameras and DVRs from the respective developers. Presets for quick settings are recorded at the factory during production.

Universal cloud P2P service - Easy4ip is compatible with most popular cameras.

To work with P2P cameras, you need software installed on the viewing device:

  • PSS for Windows and Mac OS desktop systems;
  • iDMSS for Apple mobile devices;
  • gDMSS for Android devices.

Cameras with P2P technology make it possible to quickly install and configure an effective video surveillance system without hiring expensive specialists. Cloud services give the user functionality similar to that of complex stationary video surveillance systems.


Dmitry LANDE

As is often the case, it all began with an attempt to circumvent the Law: some people developed a desire to exchange electronic, copyrighted works over the Internet.
The logic of such actions is quite clear. Let's say Ivan bought a book in a store; copyright is not violated. Ivan read the book and gave it to Peter to read. Peter didn't buy the book, and the author didn't get his share of the royalties. Peter read the book and took in the information. Did he break the law? I think not. Peter handed the book over to Stephen, who in turn handed another legally purchased book to Ivan. Three comrades - as the police say, three defendants - yet there seem to be no violations.
Let's say there are not three friends but thirty. Has the law been broken? Each author is short twenty-nine royalties. A network structure forms in which everyone is connected to everyone, with no dedicated center. In technology such networks are called decentralized, as opposed to centralized. The analogue of a centralized network is a public library. Ivan, Peter and Stephen went to the public library, read the books they chose, and took in the information. Does the library violate copyright? The question is not rhetorical but topical, and widely discussed. Whatever the Law says and whatever amendments are made, the history of human civilization shows that libraries are a blessing.
And now let's say that not three or thirty but three million friends have gathered and exchange books. Where is the line: how many friends can exchange them legally, and how many cannot? And why only books? This is the 21st century, after all. Books are being converted into electronic form (a process that is apparently irreversible), along with audio books, films in compact modern formats, multimedia encyclopedias and other software, including operating system distributions.
In many countries it has now been concluded that public centralized digital libraries (read: "servers") of copyrighted works are outside the Law. A server has an owner, it can easily be identified, and it can be shut down.
Many remember the history of the Napster web service, the global file exchange of the late twentieth century. In June 2000, a court order was issued to close it, by which time Napster had 40 million users. The Napster service was centralized - it required a central server to run the entire system. At the same time, the revolutionary nature of the system was precisely in the elements of decentralization, its users could communicate with each other directly, providing their files for download.
To circumvent the Law, and mindful of the sad history of the Napster service, developers began building file-sharing networks with a high degree of decentralization. This came, of course, at the price of functionality.
It is these systems that will be discussed below, but not so much about those aspects that still seem illegal, but about the possibilities that turned out to be quite legal and widely demanded. Along with some shortcomings of the decentralized approach to the organization of information networks, such advantages were discovered that led to their widespread use in defense, public administration, science and business.
So, we will talk about decentralized, or peer-to-peer (P2P), networks - computer networks based on the equality of participants. In such networks there are no dedicated servers, and each node (peer) is both a client and a server. In practice, peer-to-peer networks consist of nodes, each of which interacts with only a certain subset of other nodes (because of limited resources). Unlike the client-server architecture, such an organization keeps the network operational with any number and any combination of available nodes.
Today, peer-to-peer networks have developed so much that the WWW is no longer the largest information network in terms of resources and generated Internet traffic. Taken together, the traffic, the volume of information resources (in bytes) and the number of nodes of peer-to-peer networks are in no way inferior to the WWW. What is more, peer-to-peer traffic accounts for 70% of all Internet traffic (Fig. 1)! Two important aspects stand out: first, very little is written about peer-to-peer networks in the scientific literature; second, the problems of search in, and the vulnerability of, peer-to-peer networks - the largest "blank spot" of modern communications - remain open.

Client-server and P2P

The centralized client-server architecture implies that the network depends on central nodes (servers) that provide the terminals (i.e., clients) connected to the network with the necessary services. In this architecture the key role belongs to the servers, which define the network regardless of the presence of clients. Obviously, growth in the number of clients in a client-server network increases the load on the server side, so at a certain level of network development it may become overloaded.
The P2P architecture, like the client-server architecture, is also distributed. A distinctive feature of P2P is that it is a decentralized architecture where there are no concepts of "client" and "server". Each entity in the network (peer) has the same status, which allows it to perform both client and server functions. Despite the fact that all nodes have the same status, their actual capabilities can vary significantly. Quite often, peer-to-peer networks are supplemented by dedicated servers that carry organizational functions, such as authorization.
A decentralized peer-to-peer network, unlike a centralized one, becomes more productive as the number of nodes connected to it increases. Indeed, each node adds its resources (disk space and computing capability) to the P2P network; as a result, the total network resources grow.

Areas of use

There are several applications for peer-to-peer networks that explain their growing popularity. Let's name some of them.

    File sharing. P2P is an alternative to FTP archives, which are losing ground because of significant information overload.

    Distributed Computing. For example, the SETI@home project (the distributed search for extraterrestrial intelligence) has demonstrated the enormous computational potential of parallelizable tasks. More than three million users participate in it free of charge.

    Message exchange. As you know, ICQ is a P2P project.

    Internet telephony.

    Group work. Today, group work networks such as the Groove Network (a secure space for communications) and OpenCola (information search and link exchange) have been implemented.

There are many areas where P2P technology is successfully applied, for example, parallel programming, data caching, data backup.
It is well known that the Domain Name System (DNS) on the Internet is also actually a data exchange network built on the P2P principle.
The most popular Internet telephony service is Skype (www.skype.com), created in 2003 by Swede Niklas Zennström and Dane Janus Friis, the authors of the well-known KaZaA peer-to-peer network. Built on a P2P architecture, Skype now has over 10 million users. Skype is currently owned by eBay, which purchased it for $2.5 billion.

Fig. 1. Distribution of Internet traffic by protocol (a) and among P2P networks (b) (data for Germany, 2007)

P2P technology also underlies the currently popular GRID distributed computing systems. Another example of distributed computing is the distributed.net project, whose members legally crack cryptographic ciphers to test their reliability.

P2P standardization

P2P is not only a network, but also a network protocol that provides the ability to create and operate a network of peers and their interaction. A set of nodes united in a single system and interacting in accordance with the P2P protocol form a peer-to-peer network. To implement the P2P protocol, client programs are used that provide the functionality of both individual nodes and the entire peer-to-peer network.
P2P belongs to the application layer of network protocols and forms an overlay network that uses the existing transport protocols of the TCP/IP stack, TCP or UDP. Several fundamental Internet documents (RFCs) are devoted to the P2P protocol; the most recent, RFC 5128 "State of Peer-to-Peer (P2P) Communication across Network Address Translators", dates from 2008.
Currently, a variety of methodologies and approaches are used to implement peer-to-peer networks. In particular, Microsoft has developed protocols for the Scribe and Pastry P2P networks. Support for the Peer Name Resolution Protocol (PNRP), also a P2P system, was included in Windows Vista.
One of the successful attempts to standardize P2P protocols was made by Sun Microsystems as part of the JXTA project. This project is implemented with the aim of unified creation of P2P networks for various platforms. The purpose of the JXTA project is to develop standard infrastructure solutions and how to use them when creating P2P applications for working in heterogeneous environments.
The JXTA project defines six protocols upon which application systems can be built:

    Peer Discovery Protocol (PDP). Nodes use this protocol to find all open JXTA resources. The low-level PDP provides the basic search mechanisms. Application systems may include their own high-level search mechanisms that are implemented on top of the PDP.

    Peer Resolver Protocol (PRP). This protocol standardizes the format of requests for access to resources and services. When implementing this protocol, a request can be sent from the node and a response can be received.

    Peer Information Protocol (PIP). PIP is used to determine the state of a node in a JXTA network. A node receiving a PIP message MAY send a full or abbreviated status response, or ignore the message.

    Peer Membership Protocol (PMP). Hosts use this protocol to join and leave the group.

    Pipe Binding Protocol (PBP). In JXTA, a node accesses a service through a pipe. Using PBP, a node can create a new channel to access the service or work through an existing one.

    Endpoint Routing Protocol (ERP). Using this protocol, a host can forward queries to other hosts' routers to determine routes when sending messages.
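
By way of illustration, a PDP-style discovery exchange might look like the sketch below. This is a minimal model only; the class names, fields and forwarding logic are hypothetical and are not the real JXTA API.

    # Illustrative sketch of a PDP-style discovery exchange; the class names,
    # fields and forwarding logic are hypothetical, not the real JXTA API.
    from dataclasses import dataclass

    @dataclass
    class Advertisement:          # a published resource description
        peer_id: str
        resource: str

    @dataclass
    class DiscoveryRequest:       # what a PDP-like query might carry
        query: str
        ttl: int = 3              # limits how far the request propagates

    class Peer:
        def __init__(self, peer_id, resources, neighbors=None):
            self.peer_id = peer_id
            self.ads = [Advertisement(peer_id, r) for r in resources]
            self.neighbors = neighbors or []

        def discover(self, req):
            """Answer from the local cache, then forward while TTL remains."""
            hits = [ad for ad in self.ads if req.query in ad.resource]
            if req.ttl > 0:
                for n in self.neighbors:
                    hits += n.discover(DiscoveryRequest(req.query, req.ttl - 1))
            return hits

    a = Peer("A", ["printer", "scanner"])
    b = Peer("B", ["file-share"], neighbors=[a])
    print([ad.peer_id for ad in b.discover(DiscoveryRequest("printer"))])  # ['A']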

Search algorithms in peer-to-peer networks

Since completeness of search has receded into the background in today's oversaturated world, the main task of search in peer-to-peer networks is to find, quickly and efficiently, the most relevant responses to a query issued from a node. In particular, this means reducing the network traffic generated by the query (for example, by sending it to fewer nodes) while still obtaining the best characteristics of the returned documents, i.e., the highest quality result.
It should be noted that, unlike centralized systems, the organization of effective search in peer-to-peer networks is an open research problem.
In most file-sharing peer-to-peer networks, two types of entities are assigned identifiers (IDs): nodes (peers) and resources, which are characterized by keys. The network can thus be represented by a two-dimensional matrix of dimension M×N, where M is the number of nodes and N is the number of resources. The search task then reduces to finding the ID of the node where the resource key is stored. Fig. 2 shows the search for a resource with key 14, launched from node ID0.

The request travels along a certain route until it reaches the node where key 14 is stored. Node ID14 then returns to ID0 the addresses of all nodes that hold a resource corresponding to key 14.


Fig. 2. Resource lookup model by key
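
For illustration, here is a toy version of the lookup in Fig. 2, under the deliberately simplified assumption that the nodes form a ring and the query hops from neighbor to neighbor until it reaches the node storing the key. Real networks route more cleverly, as the sections below show; all names and values here are assumptions.

    # Toy version of the lookup in Fig. 2, assuming nodes form a simple ring
    # and the query hops from neighbor to neighbor until it reaches the node
    # that stores the key; that node then reports every holder of the resource.
    def lookup(nodes, keys_by_node, start_id, key):
        """nodes: ordered node IDs; keys_by_node: node ID -> set of keys."""
        route = []
        start = nodes.index(start_id)
        for step in range(len(nodes)):          # at most one trip around the ring
            node = nodes[(start + step) % len(nodes)]
            route.append(node)
            if key in keys_by_node.get(node, set()):
                holders = [n for n, ks in keys_by_node.items() if key in ks]
                return route, holders
        return route, []

    nodes = [0, 3, 7, 11, 14]
    keys = {7: {5}, 14: {14, 27}}
    print(lookup(nodes, keys, start_id=0, key=14))
    # ([0, 3, 7, 11, 14], [14]) - the route taken and the holders of key 14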

Let's consider some search algorithms in peer-to-peer networks, limiting ourselves to the main methods of searching by keywords.

Breadth-First Search Method

Fig. 3. Some search models in peer-to-peer networks

Breadth-First Search (BFS) is widely used in real P2P file-sharing networks such as Gnutella. In a P2P network of size N, the BFS method (Fig. 3a) works as follows. Node q generates a query, which is addressed to all of its neighbors (the nodes nearest by some criterion). When a node p receives the request, it performs a lookup in its local index. If some node r receives the Query and can satisfy it, it generates a QueryHit message to return the result. The QueryHit message carries information about the relevant documents and is delivered over the network to the requesting node.

When node q receives QueryHits from more than one node, it can download the file from the most accessible source. QueryHit messages travel back along the same path as the original query.

In BFS, each request creates excessive network load, since it is transmitted over all links (including nodes with high latency), so a node with low bandwidth can become a bottleneck. There is, however, a way to avoid flooding the entire network with messages: each request carries a lifetime parameter (time-to-live, TTL) that specifies the maximum number of hops over which it may be forwarded.

In a typical search, the initial TTL is usually 5-7 and is decremented each time the request is forwarded to the next node. When the TTL reaches 0, the message is no longer transmitted. BFS achieves high match quality at the price of a large number of messages.
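
A minimal sketch of BFS flooding with a TTL follows, using a toy in-memory network model; the graph, index and names are illustrative assumptions, not part of any real Gnutella implementation.

    # Minimal sketch of BFS flooding with a TTL over a toy network model:
    # `graph` maps a node to its neighbors; every visited node checks its
    # local index, and each hit stands in for a QueryHit sent back.
    def bfs_query(graph, local_index, start, keyword, ttl=5):
        hits, seen = [], {start}
        frontier = [start]
        while frontier and ttl > 0:
            nxt = []
            for node in frontier:
                for nb in graph[node]:
                    if nb in seen:
                        continue            # duplicate suppression
                    seen.add(nb)
                    if keyword in local_index.get(nb, ()):
                        hits.append(nb)     # a QueryHit would be sent back here
                    nxt.append(nb)
            frontier = nxt
            ttl -= 1                        # the message dies when TTL reaches 0
        return hits

    graph = {"q": ["a", "b"], "a": ["q", "c"], "b": ["q"], "c": ["a"]}
    index = {"c": ("song.mp3",), "b": ("song.mp3",)}
    print(bfs_query(graph, index, "q", "song.mp3"))  # ['b', 'c']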

Random Breadth-First Search Method

Random Breadth-First Search (RBFS) has been proposed as an improvement on the "naive" BFS approach. In RBFS (Fig. 3b), node q forwards the query to only a randomly selected fraction of its neighbors; what fraction is a parameter of the method. The advantage of RBFS is that no global information about the network's content is required, and a node can obtain local results as quickly as it needs. On the other hand, the method is probabilistic, so some large network segments may turn out to be unreachable. A sketch of the forwarding rule follows.
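
The change relative to BFS is confined to the forwarding rule, as this small sketch shows; the fraction parameter is the method's tunable knob, and the value here is an arbitrary assumption.

    # RBFS differs from BFS only in the forwarding rule: a random fraction
    # of neighbors is chosen instead of all of them.
    import random

    def rbfs_forward(neighbors, fraction=0.5):
        k = max(1, int(len(neighbors) * fraction))   # always forward somewhere
        return random.sample(neighbors, k)

    print(rbfs_forward(["a", "b", "c", "d"]))  # e.g. ['c', 'a']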

Intelligent Search Mechanism

The Intelligent Search Mechanism (ISM) is a newer method of searching in P2P networks (Fig. 3c). It improves the speed and efficiency of retrieval by minimizing communication costs, that is, the number of messages passed between nodes and the number of nodes polled for each query. To achieve this, only the nodes that best match a given query are selected for it.

The intelligent search mechanism consists of two components:

  • a profile that node q builds for each of its neighboring nodes, containing that neighbor's most recent responses;
  • a ranking mechanism over those profiles (relevance rank), used to select the neighbors that will return the most relevant documents for the query.

The profile mechanism stores the most recent queries together with quantitative characteristics of their results. The ISM model uses a single query stack holding T queries for the q neighboring nodes; when the stack fills up, the node applies a least recently used (LRU) replacement rule so that the newest queries are kept.

ISM works effectively in networks where nodes hold specialized information. In particular, a study of the Gnutella network shows that search quality depends strongly on the "environment" of the node issuing the query. Another problem with ISM is that query messages can loop and fail to reach certain parts of the network. To counter this, a small random subset of nodes is usually added to the set of relevant nodes for each query, which lets ISM cover a larger part of the network.
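
A rough sketch of ISM's neighbor ranking follows, under assumed data structures: the profile layout, the similarity measure and the scoring are illustrative guesses, not the published algorithm's exact formulas.

    # Rough sketch of ISM neighbor ranking. Each profile keeps recent queries
    # and how many results that neighbor returned; the rank combines keyword
    # overlap with past productivity. Weights and structures are assumptions.
    import random

    def similarity(q1, q2):
        a, b = set(q1.split()), set(q2.split())
        return len(a & b) / len(a | b) if a | b else 0.0

    def rank_neighbors(profiles, query, top=2, random_extra=1):
        scores = {}
        for nb, history in profiles.items():   # history: list of (query, n_results)
            scores[nb] = sum(similarity(query, q) * n for q, n in history)
        best = sorted(scores, key=scores.get, reverse=True)[:top]
        rest = [nb for nb in profiles if nb not in best]
        # the random additions let the query escape the node's usual "environment"
        return best + random.sample(rest, min(random_extra, len(rest)))

    profiles = {"a": [("jazz mp3", 4)], "b": [("rock mp3", 1)], "c": []}
    print(rank_neighbors(profiles, "jazz music mp3"))  # e.g. ['a', 'b', 'c']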

Method of "most results by past heuristics"

In the Most Results in the Past (>RES) method, each node forwards the request to a subset of its neighbors selected on the basis of aggregated statistics (Fig. 3d).

A query in >RES is considered satisfied if Z or more results are returned, where Z is some constant. Node q forwards queries to the k nodes that produced the most results for the last m queries. In experiments k was varied from 1 to 10, making >RES range in behavior from BFS to a depth-first search (DFS) approach.

>RES is similar to the ISM method discussed earlier, but uses simpler per-node information. Its main disadvantage compared with ISM is that it does not analyze which nodes' content actually relates to the query, so it is more a quantitative than a qualitative approach. Experience shows that >RES is good at routing queries toward large network segments (which probably also contain more relevant answers) and toward less congested neighbors, starting with those that typically return more results.
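
The forwarding rule itself is simple enough to sketch; everything here (the statistics layout, k and m) is an illustrative assumption.

    # Sketch of the >RES forwarding rule: send the query to the k neighbors
    # that returned the most results over the last m queries - pure
    # statistics, no content analysis, which is its difference from ISM.
    def res_forward(stats, k=2, m=5):
        """stats: neighbor -> list of result counts, most recent last."""
        totals = {nb: sum(counts[-m:]) for nb, counts in stats.items()}
        return sorted(totals, key=totals.get, reverse=True)[:k]

    stats = {"a": [0, 3, 5], "b": [1, 1, 1], "c": [9]}
    print(res_forward(stats))  # ['c', 'a']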

Random walk method

The key idea of the Random Walkers Algorithm (RWA) is that each node forwards the query message, called a "walker", to a single randomly chosen neighbor. To reduce the time needed to obtain results, the idea of one walker is extended to k walkers: k independent walkers are launched in sequence from the source node.

The expectation is that k walkers after T steps achieve roughly the same results as one walker after kT steps. The algorithm resembles RBFS, but RBFS produces an exponential growth in the number of messages sent, whereas the random walk method produces a linear one. Neither RBFS nor RWA uses any explicit rule to direct the query toward the most relevant content.

Another technique similar to RWA is Adaptive Probabilistic Search (APS). In APS, each node maintains a local index of conditional probabilities with which each neighbor may be chosen as the next hop for a future query. The main difference from RWA is that APS uses feedback from previous searches instead of entirely random transitions.
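
A compact sketch of k random walkers over a toy graph follows; all names and parameters are assumptions for illustration.

    # Sketch of k independent random walkers: each carries the query for at
    # most T steps, so the message cost grows linearly in k and T rather
    # than exponentially as in flooding.
    import random

    def random_walkers(graph, local_index, start, keyword, k=3, T=10, seed=None):
        rng = random.Random(seed)
        hits = set()
        for _ in range(k):                          # k independent walkers
            node = start
            for _ in range(T):
                node = rng.choice(graph[node])      # one random hop per step
                if keyword in local_index.get(node, ()):
                    hits.add(node)
        return hits

    graph = {"q": ["a", "b"], "a": ["q", "b"], "b": ["a", "c", "q"], "c": ["b"]}
    print(random_walkers(graph, {"c": ("file",)}, "q", "file", seed=1))
    # nodes where the file was found, e.g. {'c'}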

Examples of file-sharing peer-to-peer networks

P2P file-sharing networks, which currently encompass over 150 million nodes, deserve special consideration. Let us look at the most popular of them: BitTorrent, Gnutella2 and eDonkey2000.

The BitTorrent ("bit stream") network was created in 2001. Under the BitTorrent protocol, files are transferred not whole but in parts: each client, while downloading parts, simultaneously uploads them to other clients, which reduces the load on and dependence upon each source client and provides data redundancy. To join the BitTorrent network (www.bittorrent.com), the client program contacts a dedicated server (tracker) that supplies information about the files available for copying as well as statistical and routing information about network nodes. Even after initialization the server "helps" nodes interact with each other, although the latest client versions require it only at the initialization stage, approaching the ideal of the peer-to-peer concept.

If a node "wants" to publish a file, the program splits the file into parts and creates a metadata file (a torrent file) describing the parts, their location, and the node that will support the distribution of this file.
Many compatible client programs have been written for various computer platforms. The most common are Azureus, the official BitTorrent client, μTorrent, BitSpirit, BitComet, BitTornado and MLDonkey.

In 2000, one of the first peer-to-peer networks, Gnutella (www.gnutella.com), was created; its algorithm has since been improved. Today a later branch of this network has gained popularity: Gnutella2 (www.gnutella2.com), created three years later, in 2003, which implements an open P2P file-sharing protocol used by Shareaza.
In accordance with the Gnutella2 protocol, some nodes become hubs, while the rest are ordinary nodes (leaves). Each regular node has a connection to one or two hubs. Gnutella2 implements information retrieval using the walk method. Under this protocol, a hub has connections to hundreds of nodes and dozens of connections to other hubs. Each node sends to the hub a list of keyword identifiers by which published resources can be found. To improve the quality of the search, the metadata of the files is also used - information about the content, ratings. It is allowed to "reproduce" information about a file on the network without copying the file itself.
For packets transmitted in the network, a proprietary format was developed that makes it possible to extend the network's functionality by adding service information. Queries and lists of keyword IDs in Gnutella2 are sent to hubs over UDP.
The most common programs for Gnutella2 are Shareaza, Kiwi, Alpha, Morpheus, Gnucleus, Adagio Pocket G2, FileScope, iMesh, MLDonkey.

The eDonkey2000 network was also established in 2000. Information about available files is published by clients on numerous servers as ed2k links that use a unique resource ID. Search for nodes and information in eDonkey2000 is provided by dedicated servers. There are currently about 200 servers and about a billion files on the network, and roughly 10 million eDonkey2000 users.
During operation, each eDonkey2000 client connects to one of the servers and tells it which files it is sharing. Each server maintains a list of all the shared files of the clients connected to it. When a client searches for something, it sends the query to its main server, which checks all the files it knows and returns a list of those matching the query. It is possible to search across several servers at once; such queries and their results are carried over UDP to reduce bandwidth load and the number of connections to the servers. This feature is especially useful when a search on the client's current server returns few results.
When an eDonkey2000 client copies a desired resource, it does so simultaneously from multiple sources, using MFTP (the Multisource File Transfer Protocol).
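
The ed2k link format itself is easy to illustrate. The sketch below only formats a link from an already-known hash; the hash is MD4-based, computed over roughly 9.28 MB chunks of the file, and its computation is omitted here. The sample name, size and digest are placeholders.

    # Sketch of the ed2k link format used to publish files; only the string
    # formatting is shown, and the sample values are placeholders.
    def ed2k_link(name: str, size: int, hash_hex: str) -> str:
        return f"ed2k://|file|{name}|{size}|{hash_hex.upper()}|/"

    print(ed2k_link("example.iso", 731906048, "31d6cfe0d16ae931b73c59d7e0c089c0"))
    # ed2k://|file|example.iso|731906048|31D6CFE0D16AE931B73C59D7E0C089C0|/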
Since 2004, the Overnet system (www.overnet.com) has been integrated into the eDonkey2000 network. Overnet is a completely decentralized system that lets nodes interact without being "bound" to servers, using the Kademlia DHT protocol. This integration of different networks, with additional verification, contributed to the further growth of eDonkey2000.
The most popular closed-source client for the eDonkey2000 network is eDonkey itself, but there is also an open-source client, eMule, which in addition to eDonkey2000 can use another P2P network, Kad Network.

Peer-to-Peer Network Vulnerabilities

It should be recognized that in addition to the above-mentioned advantages of peer-to-peer networks, they also have a number of disadvantages.
The first group of disadvantages is associated with the complexity of managing such networks, compared with client-server systems, if they are used in automated control systems. In the case of a P2P network, significant efforts have to be made to maintain a stable level of its performance, data backup, anti-virus protection, protection against information noise and other malicious user actions.
It should be noted that peer-to-peer networks are subject to virus attacks from time to time, which began in 2002 with the Worm.Kazaa.Benjamin network worm, which spreads through the KaZaA peer-to-peer network.
Another problem of P2P networks is related to the quality and reliability of the content provided. A serious problem is the falsification of files and the distribution of fake resources.
In addition, protecting a distributed network from hacker attacks, viruses and Trojan horses is a very difficult task. Often, information about the participants in P2P networks is stored in an open form, available for interception. A serious problem is also the possibility of falsifying node IDs.
The author considered a model of a hybrid peer-to-peer network with dedicated nodes (Fig. 4) that link individual peers and maintain search directories.


Fig. 4. Hybrid peer-to-peer network with dedicated servers

The model assumed N nodes, each logically connected to, on average, n others (n << N). Search is provided by M search nodes, each of which is in turn connected to a certain number of ordinary nodes, to which it serves as a search directory. Directory sizes follow an exponential law: the i-th search directory is connected to k·exp{a·i} nodes, where k and a are constants. This distribution of search nodes is indeed often observed in practice.
The vulnerability analysis asked how the information connectivity of the peer-to-peer network would be disrupted if a certain number of the leading search directories were disabled.
The calculations confirmed that a peer-to-peer network built according to these criteria is highly resistant to the removal of random search nodes. Its dependence on the removal of the largest nodes, however, is very strong: removing them leads to an exponential deterioration of indicators such as the minimum path length between nodes and the clustering coefficient.
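
The experiment is easy to reproduce in miniature. The sketch below builds such a network with exponentially distributed directory sizes and compares removing the three largest directories with removing three random ones; all constants are arbitrary assumptions, and "coverage" is a deliberately crude stand-in for connectivity.

    # Miniature version of the vulnerability experiment: M search directories,
    # the i-th serving about k*exp(a*i) nodes; we compare removing random
    # directories with removing the largest ones. Constants are arbitrary.
    import math, random

    def build(M=20, k=2.0, a=0.25, N=2000, seed=0):
        rng = random.Random(seed)
        sizes = [max(1, int(k * math.exp(a * i))) for i in range(M)]
        # attach each ordinary node to one directory, proportionally to its size
        assignment = [rng.choices(range(M), weights=sizes)[0] for _ in range(N)]
        return assignment, sizes

    def coverage(assignment, removed):
        """Share of ordinary nodes whose directory survived."""
        return sum(d not in removed for d in assignment) / len(assignment)

    assignment, sizes = build()
    top3 = set(sorted(range(len(sizes)), key=sizes.__getitem__, reverse=True)[:3])
    rnd3 = set(random.Random(1).sample(range(len(sizes)), 3))
    print("remove 3 largest :", coverage(assignment, top3))  # large loss
    print("remove 3 random  :", coverage(assignment, rnd3))  # typically small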

Conclusion

Compared with the client-server architecture, P2P offers self-organization, tolerance to loss of communication with network nodes (high survivability), resource sharing without binding to specific addresses, faster copying of information through the use of several sources at once, wide aggregate bandwidth and flexible load balancing.
Due to such characteristics as survivability, fault tolerance, and the ability to self-develop, peer-to-peer networks are increasingly used in enterprise and organization management systems (for example, P2P technology is currently used in the US State Department).
There are many areas where P2P technology works successfully, such as parallel programming, data caching, data backup.
Separately, the disadvantages inherent in public file-sharing networks should be noted. The biggest problem is the legitimacy of the content transferred in such P2P networks; the failure to resolve it has already led to the scandal-ridden closure of many such networks. It is worth noting that, despite numerous lawsuits against peer-to-peer networks, in April of this year the European Parliament declined to "criminalize" P2P.
There are other social problems as well. In the Gnutella system, for example, 70% of users add no files to the network at all, and more than half of its resources are provided by one percent of users; that is, the network is evolving toward a client-server architecture.

Dmitry LANDE, SiB
Doctor of Technical Sciences, Deputy director of IC ElVisti

A cursory examination of the literature reveals many different interpretations of the Peer-to-Peer concept, which differ mainly in the range of features included.

The strictest definitions of a "pure" peer-to-peer network treat it as a completely distributed system in which all nodes are absolutely equal in functionality and tasks performed. This definition excludes systems based on the idea of "supernodes" (nodes that act as dynamically assigned local mini-servers), such as Kazaa (which does not prevent its wide acceptance as a P2P network), as well as systems that use some centralized server infrastructure for a subset of auxiliary tasks: self-tuning, reputation management, and so on.

By a broader definition, P2P is a class of applications that use resources - hard drives, processor cycles, content - available at the edge of the Internet cloud. This definition also covers systems that depend on centralized servers for their operation (such as SETI@home, instant messaging systems, or even the infamous Napster network), as well as various applications from the field of grid computing.

As a result, there is frankly no single point of view on what is and what is not a P2P network. The existence of multiple definitions is most likely explained by the fact that systems or applications are called P2P not because of their internal operation or architecture but because of how they are perceived from outside, that is, whether they give the impression of direct interaction between computers.

At the same time, many agree that the main characteristics of the P2P architecture are the following:

  • sharing of computer resources by direct exchange, without intermediaries. Centralized servers may sometimes be used for specific tasks (bootstrapping, adding new nodes, obtaining global keys for data encryption). Since peer nodes cannot rely on a central server to coordinate content exchange and operations across the network, they must independently and unilaterally take an active role in tasks such as finding other nodes, localizing or caching content, routing information and messages, connecting to and disconnecting from neighboring nodes, and encrypting and verifying content;
  • the ability to treat unstable and inconsistent connections as the norm, automatically adapting to disconnections and computer failures, as well as a variable number of nodes.

Based on these requirements, a number of experts propose the following definition (its style is somewhat reminiscent of a patent, but attempts to simplify it only make it worse): a P2P network is a distributed system containing interconnected nodes capable of self-organizing into network topologies in order to share resources such as content, processor cycles, storage and bandwidth, adapting to failures and a variable number of nodes while maintaining an acceptable level of connectivity and performance, without requiring intermediaries or the support of a global central server.

This is a good point at which to discuss the peculiarities of computing in grid and P2P systems. Both represent approaches to distributed computing using shared resources in a large-scale computing community.

Computing grids are distributed systems that provide large-scale, coordinated use and sharing of geographically distributed resources, built on permanent, standardized service infrastructures and intended primarily for high-performance computing. As they scale up, such systems begin to face self-configuration and fault tolerance issues. P2P systems, in turn, are designed from the start for instability, a variable number of nodes, fault tolerance and self-adaptation. So far, P2P developers have mostly built vertically integrated applications without defining common protocols and standardized frameworks for interoperability.

However, as P2P technology advances and more complex applications such as structured content distribution, PC collaboration, and network computing are used, P2P and grid computing are expected to converge.

Classification of P2P applications

P2P architectures have been used for many applications in different categories. We give a brief description of some of them.

Communication and collaboration. This category includes systems that provide an infrastructure for direct, usually real-time, communication and collaboration between peer computers. Chat and instant messaging are examples.

Distributed Computing. The goal of these systems is to combine the computing power of peers to solve computationally intensive problems. The task is divided into many small subtasks distributed across different nodes, whose results are then returned to the originating host. Examples are the SETI@home project and a number of similar @home efforts.

Database systems. Significant effort has gone into developing distributed databases based on a P2P infrastructure. In particular, a Local Relational Model has been proposed, which assumes that the set of all data stored in the P2P network consists of incompatible local relational databases (i.e., not satisfying given integrity constraints) interconnected by "mediators" that define translation rules and semantic dependencies between them.

Content distribution. This category includes most of today's P2P networks, which include systems and infrastructures designed to share digital audiovisual information and other data between users. The spectrum of such content distribution systems ranges from relatively simple direct file sharing applications to more complex ones that create distributed storage environments that provide secure and efficient data organization, indexing, searching, updating and retrieval. Examples include the late Napster network, Gnutella, Kazaa, Freenet, and Groove. In what follows, we will focus on this class of networks.

Distribution of content in P2P networks

In the most typical case, such systems form a distributed storage environment in which users can publish, search for and retrieve files. As they grow more complex, they may add non-functional features such as security, anonymity, fairness, scalability, resource management and organizational capabilities. Modern P2P technologies can be classified as follows.

P2P applications. This category covers content distribution systems built on P2P technology. By purpose and complexity they fall into two subgroups:

  • file sharing systems, designed for simple one-off exchange between computers. They create a network of peers and provide means for searching and transferring files between them. These are typically "lightweight" applications with best-effort quality of service that do not concern themselves with security, availability or survivability;
  • content publishing and storage systems, which provide a distributed storage environment where users can publish, store and distribute content securely and reliably. Access to such content is controlled, and nodes must hold the appropriate privileges to obtain it. The main objectives are data security and network survivability, often together with means for identifiability, anonymity and content management (update, delete, version control).

P2P infrastructures. This category covers the supporting mechanisms on which such applications rest:

  • address determination and routing. Any P2P content distribution system relies on a network of peers within which nodes and content must be efficiently localized, and requests and responses routed, while ensuring fault tolerance. Various infrastructures and algorithms have been developed to meet these requirements;
  • ensuring anonymity. P2P infrastructure systems should be designed so that users can remain anonymous;
  • reputation management. P2P networks have no central authority to maintain reputation information about users and their behavior, so it is spread across many nodes. Keeping it secure, up to date and available throughout the network requires a sophisticated reputation management infrastructure.

Localization and routing of distributed objects in P2P networks

The functioning of any P2P content distribution system relies on nodes and connections between them. This network is formed on top of and independently of the underlying one (typically IP) and is therefore often referred to as an overlay. The topology, structure, degree of centralization of the overlay network, localization and routing mechanisms that it uses to transfer messages and content are decisive for the operation of the system, since they affect its fault tolerance, performance, scalability and security. Overlay networks vary in degree of centralization and structure.

Centralization. Although the strictest definition assumes that overlay networks are completely decentralized, in practice this is not always adhered to, and systems with varying degrees of centralization are encountered. In particular, there are three categories:

  • fully decentralized architectures. All nodes in the network perform the same tasks, acting as servers and clients, and there is no central point coordinating their activities;
  • partially centralized architectures. The basis here is the same as in the previous case, but some of the nodes play a more important role, acting as local central indexes for files shared by local nodes. The way in which these supernodes are given their role in the network varies from system to system. It is important to note, however, that these supernodes are not a single point of failure for the P2P network, as they are dynamically assigned and, in the event of a failure, the network automatically transfers their functions to other nodes;
  • hybrid decentralized architectures. Such systems have a central server that facilitates communication between nodes by managing a metadata directory describing the shared files stored on them. Although end-to-end communication and exchange of the latter can be carried out directly between two nodes, central servers facilitate this process by viewing and identifying nodes that store files.

Obviously, in these architectures there is a single point of failure - the central server.

Network structure characterizes whether the overlay is created non-deterministically (ad hoc), as nodes and content are added, or according to special rules. In terms of structure, P2P networks fall into two categories:

  • unstructured. The placement of content (files) has no relation to the topology of the overlay network, so in the typical case content must be localized by search. Search mechanisms range from brute-force methods, such as flooding requests breadth-first or depth-first until the desired content is found, to more sophisticated strategies involving random walks and route indexing. The search mechanism used in an unstructured network has a direct impact on availability, scalability and reliability.

Unstructured systems are more suitable for networks with a variable number of nodes. Examples are Napster, Gnutella, Kazaa, Edutella and several others;

  • structured. The emergence of such networks was primarily associated with attempts to solve the scalability problems that unstructured systems initially encountered. In structured networks, the overlay topology is tightly controlled, and files (or pointers to them) are placed in strictly defined locations. These systems essentially map content (say, a file ID) to its location (say, a node address) in the form of a distributed routing table, so that requests can be efficiently routed to the node with the content being searched for.

Structured systems (these include Chord, CAN (Content Addressable Network), Tapestry, and a number of others) provide scalable solutions for exact match searches, i.e., for queries in which the exact identifier of the desired data is known. Their disadvantage is the complexity of structure management required for efficient message routing in a variable number of nodes environment.
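
The core idea behind such systems is easy to sketch: hash node IDs and content IDs into one circular space and store each key on its "successor" node, Chord-style. This is a simplification; real systems add finger tables and similar structures for logarithmic routing.

    # Sketch of the structured-network idea (Chord-style consistent hashing):
    # node IDs and content IDs share one circular space, and each key lives
    # on its successor node, so lookups can be routed deterministically.
    import hashlib

    def h(value: str, bits: int = 16) -> int:
        return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** bits)

    def successor(node_ids, key_id):
        ring = sorted(node_ids)
        for n in ring:
            if n >= key_id:
                return n
        return ring[0]                       # wrap around the ring

    nodes = [h(f"node-{i}") for i in range(5)]
    key = h("file:song.mp3")
    print(f"key {key} is stored on node {successor(nodes, key)}")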

Networks occupying an intermediate position between structured and unstructured are called semi-structured. Although they do not fully determine content location, they nevertheless help narrow the search for a route (a typical example of such a network is Freenet).

Now let's discuss overlay networks in more detail in terms of their structure and degree of centralization.

Unstructured architectures

Let's start with fully decentralized architectures (see the definition above). The most interesting representative of such networks is Gnutella. Like most P2P systems, it builds a virtual overlay network with its own routing mechanism, allowing its users to share files. There is no centralized coordination of operations in the network; nodes connect to each other directly, using software that functions as both client and server (its users are therefore called "servents", from SERVers + cliENTS).

Gnutella uses IP as its underlying network protocol, while communication between nodes is defined by an application-layer protocol supporting four message types:

  • Ping - a request to a specific host to announce itself;
  • Pong - the response to a Ping, containing the responding host's IP address and port plus the number and total size of the files it shares;
  • Query - a search request, containing the search string and the minimum speed requirements for the responding host;
  • Query Hits - the response to a Query, containing the responding host's IP address, port and connection speed, the number of files found and their index set.

After joining the Gnutella network (by contacting hosts found in databases such as gnutellahosts.com), a host sends a Ping message to its associated hosts. They reply with Pong messages identifying themselves and forward the Ping to their own neighbors.
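
For illustration, the four message types can be rendered as simple records. The field sets follow the descriptions above; real Gnutella messages are binary and also carry a GUID, and the sample values are placeholders.

    # Compact sketch of the four Gnutella message types described above.
    from dataclasses import dataclass

    @dataclass
    class Ping:                 # "announce yourself"
        ttl: int = 7

    @dataclass
    class Pong:                 # reply to a Ping
        ip: str
        port: int
        files: int              # number of shared files
        kbytes: int             # total size of shared files

    @dataclass
    class Query:
        search: str
        min_speed: int          # minimum speed required of responders
        ttl: int = 7

    @dataclass
    class QueryHit:
        ip: str
        port: int
        speed: int
        results: list           # index set of matching files

    hit = QueryHit("10.0.0.5", 6346, 512, ["song.mp3"])
    print(hit)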

In an unstructured system like Gnutella, the only way to locate a file is non-deterministic search, since nodes have no way of guessing where it resides.

Initially, the Gnutella architecture used a flooding (broadcast) mechanism to distribute Ping and Query messages: each node forwarded received messages to all of its neighbors, and responses traveled back along the reverse path. To limit message traffic on the network, every message carried a Time-to-Live (TTL) field in its header; transit nodes decremented it, and when it reached 0 the message was dropped.

This mechanism was implemented by assigning unique IDs to messages and keeping dynamic routing tables of message IDs and node addresses at each host. When a response carries the same ID as an earlier outgoing message, the host consults its routing table to determine the channel along which to send it back, thereby breaking loops.
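
A sketch of that reverse-path bookkeeping follows, with hypothetical names.

    # Sketch of reverse-path routing: each node remembers which neighbor
    # delivered a message ID, drops duplicates (loops), and routes any
    # response with that ID back the same way.
    class Node:
        def __init__(self, name):
            self.name = name
            self.routes = {}                  # message ID -> neighbor it came from

        def on_query(self, msg_id, from_node):
            if msg_id in self.routes:
                return False                  # duplicate: break the loop
            self.routes[msg_id] = from_node
            return True                       # new message: forward further

        def on_reply(self, msg_id):
            return self.routes.get(msg_id)    # send the reply back this way

    n = Node("p")
    print(n.on_query("g1", "a"))   # True, back-channel remembered
    print(n.on_query("g1", "b"))   # False, already seen
    print(n.on_reply("g1"))        # 'a'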

Fig. 1. An example of the search mechanism in an unstructured system

If a node receives a Query Hit message indicating that the file sought is located on a specific computer, it initiates a download over a direct connection between the two nodes. The search mechanism is shown in Fig. 1.

Partially centralized systems are in many ways similar to fully decentralized ones, but they use the concept of supernodes: computers dynamically assigned the task of serving a small part of the overlay network by indexing and caching the files it contains. Supernodes are selected automatically based on their processing power and bandwidth.

Supernodes index files shared by their connected nodes and act as proxy servers to perform searches on their behalf. Therefore, all requests are initially directed to the supernodes.

Partially centralized systems have two advantages:

  • reduced search time compared with fully decentralized systems, while still avoiding a single point of failure;
  • effective use of the inherent heterogeneity of P2P networks. In fully decentralized systems, all nodes are equally loaded, regardless of their processing power, channel bandwidth, or storage capabilities. In partially centralized systems, supernodes take on most of the network load.

An example of a partially centralized system is the Kazaa network.

Fig. 2 illustrates an example of a typical P2P architecture with hybrid decentralization. Each client computer stores the files it shares with the rest of the overlay network. All clients connect to a central server that maintains tables of registered users (IP address, bandwidth, etc.) and lists of the files each user holds and shares over the network, along with file metadata (e.g., name, creation time, etc.).

A computer wanting to join the community connects to the central server and reports the files it holds. Client nodes send the server requests for files; it searches its index table and returns a list of the users who hold them.

The advantage of hybrid decentralized systems is that they are simple to implement and file searches are fast and efficient. Their main disadvantages are vulnerability to control, censorship, legal action, attacks and technical failures, since the shared content, or at least its description, is controlled by a single organization, company or user. Moreover, such systems scale poorly, since capacity is limited by the size of the server's database and its ability to answer requests. Napster is the example here.
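
A hybrid (Napster-style) directory of this kind fits in a few lines; in the sketch below only the index lives on the "server", while the transfer itself would happen directly between peers. Addresses and names are placeholders.

    # Sketch of a hybrid directory: the central server keeps only the index;
    # the file transfer itself happens directly between two peers.
    class CentralIndex:
        def __init__(self):
            self.catalog = {}                 # filename -> set of user addresses

        def register(self, user, files):
            for f in files:
                self.catalog.setdefault(f, set()).add(user)

        def search(self, filename):
            return sorted(self.catalog.get(filename, ()))

    index = CentralIndex()
    index.register("10.0.0.1:6699", ["song.mp3", "demo.avi"])
    index.register("10.0.0.2:6699", ["song.mp3"])
    print(index.search("song.mp3"))  # both holders; the download is peer-to-peer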

Structured architectures

A variety of structured content distribution systems use different mechanisms for routing messages and finding data. We will focus on the most familiar to Ukrainian users - Freenet.

This overlay network belongs to the group of semi-structured systems. Recall that their main characteristic is the ability of nodes to determine (not centrally!) where particular content is most likely stored, on the basis of routing tables that map content (file IDs) to locations (node addresses). This lets them avoid blind broadcasting of requests; instead, a chaining method is used, in which each node makes a local decision about where to send the message next.

Freenet is a typical example of a fully decentralized semi-structured content distribution system. It functions as a self-organizing peer-to-peer network, pooling unused computer disk space to create a shared virtual file system.

Files in Freenet are identified by unique binary keys. Three key types are supported, the simplest of which is based on applying a hash function to a short descriptive text string that accompanies every file stored on the network by its owner.
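
For the simplest key type, the derivation can be sketched as hashing the descriptive string. This is a stand-in only: in the real system the string first seeds a key pair whose public half is then hashed, and SHA-1 here is an assumed choice of hash function.

    # Sketch of deriving a Freenet-style binary key from the short
    # descriptive string supplied by the file's owner (simplified).
    import hashlib

    def descriptive_key(description: str) -> bytes:
        return hashlib.sha1(description.encode("utf-8")).digest()

    key = descriptive_key("text/books/war-and-peace")
    print(key.hex())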

Each Freenet node manages its own local data store, which it makes readable and writable by other nodes, as well as a dynamic routing table containing the addresses of other nodes and the files they hold. To find a file, a user sends a request containing the key and a lifetime value expressed as the maximum number of hops-to-live.

Freenet uses the following message types, each of which carries a node ID (for loop detection), a hops-to-live value, and source and destination IDs:

  • data insert - places new data on the network (the message contains the key and the file);
  • data request - asks for a specific file (the message contains the key);
  • data reply - the response when the file is found (the file is included in the message);
  • data failed - reports a failed search (indicating the node and the cause of the failure).

To join Freenet, a computer first determines the address of one or more existing nodes and then sends data insert messages.

To put a new file on the network, a node first computes its binary key and sends a data insert message to itself. Any node receiving such a message first checks whether the key is already in use. If not, it finds the nearest key (in the sense of lexicographic distance) in its routing table and forwards the message (with the data) to the corresponding node. By this mechanism, new files are placed on nodes that already hold files with similar keys.

This continues until the hops-to-live limit is exhausted; thus the file being distributed ends up hosted on more than one node. All nodes involved in the process also update their routing tables; this is the mechanism by which new nodes announce their presence to the network. If the hops-to-live limit is reached without a key collision, an "all correct" message travels back to the source, informing it that the file was placed on the network successfully. If the key is already in use, the node returns the existing file as though it had been requested; thus an attempt to substitute a file under an existing key only causes the genuine file to spread further.

When a node receives a request for a file it stores, the search stops and the data is sent back to the initiator. If the file is not present, the node forwards the request to the neighbor most likely to hold it, looked up in the routing table by the nearest key, and so on. Note that this is a simplified description that gives only a general picture of how Freenet works.
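
A condensed sketch of that steepest-descent routing follows, with numeric distance standing in for lexicographic closeness and all structures assumed.

    # Sketch of Freenet-style routing: at every hop the node picks, from its
    # routing table, the neighbor whose known key is nearest to the requested
    # one, until the data is found or hops-to-live runs out. Numeric distance
    # on hex keys stands in for lexicographic closeness here.
    class FNode:
        def __init__(self, store=None, table=None):
            self.store = store or {}     # key (hex) -> data
            self.table = table or {}     # known key (hex) -> neighboring FNode

    def request(node, key, htl=10):
        while htl > 0:
            if key in node.store:
                return node.store[key]   # a data reply would travel back
            if not node.table:
                return None              # dead end: data failed
            best = min(node.table, key=lambda k: abs(int(k, 16) - int(key, 16)))
            node = node.table[best]      # forward to the closest neighbor
            htl -= 1
        return None                      # hops-to-live exhausted

    c = FNode(store={"0f": b"file"})
    b = FNode(table={"0e": c})
    a = FNode(table={"02": b, "0d": b})
    print(request(a, "0f"))  # b'file'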

Here we will end this very concise review of P2P technologies and touch on their use in business. It is easy to identify several advantages of the P2P architecture over client-server that are in demand in a business environment:

  • high reliability and availability of applications in decentralized systems, owing to the absence of a single point of failure and the distributed nature of information storage;
  • better utilization of resources, be it communication bandwidth, processor cycles or disk space; duplication of working data also significantly reduces (but does not completely eliminate) the need for backups;
  • ease of deployment and use, since the same software modules perform both client and server functions, especially when working in a local network.

The potential of P2P networks proved so great that Hewlett-Packard, IBM, and Intel initiated a working group to standardize the technology for commercial use. The new version of Microsoft Windows Vista will include built-in collaboration tools that allow laptops to share data with their nearest neighbors.

Early adopters of the technology, such as aerospace giant Boeing, oil company Amerada Hess and Intel itself, say it reduces the need to purchase high-end computing systems, including mainframes. P2P systems can also ease network bandwidth requirements, which matters for companies whose networks are already strained.

Intel began using P2P technology in 1990 in an effort to reduce chip development costs. The company has created its own system, called NetBatch, that connects more than 10,000 computers, giving engineers access to globally distributed computing resources.

Boeing uses distributed computing to perform resource-intensive test cases. The company uses a Napster-like network model in which servers route traffic to designated nodes. "There is no single computer that meets our requirements," says Ken Neves, director of research.

The potential of P2P technologies has attracted the attention of venture capital as well. For example, Softbank Venture Capital invested $13 million in United Devices, which develops technologies for three markets: biotech computing, quality of service (QoS) and load testing for Web sites, and content indexing based on a "worm" crawling method used by a number of machines on the Internet.

In any case, five areas of successful P2P networks are already evident today. These are file sharing, application sharing, system integrity, distributed computing, and device interoperability. There is no doubt that in the near future there will be even more of them.

And while we sit here wondering where to place our ads, something curious is happening in Palo Alto. There, the employees of Hassett Ace Hardware, a small store selling household equipment, are showing how the ancient wisdom that "people are made not for accumulation, but for exchange" can come to life.

It's called "Repair Cafe". Every weekend, a platform opens near the store, where anyone can repair anything for free. But at the same time, he will have to contribute to what is happening on this site. While the manager of the store is doing the usual sales, five other employees organize crowds of people who want to "repair" people, involving them in other repairs.

Everyone shares knowledge, advice and good cheer. Sales are climbing (repairs often require parts that have to be bought in the store). Around 130 items were repaired in April, including a giant garden lava fountain and a 200-year-old sewing machine. Everyone who has done a repair at the Hassett Ace Hardware site receives a bicycle flag with the company logo. And they take it gladly, because great service is a damn pleasant and unforgettable thing.

In marketing circles, this economy of mutual benefit has been dubbed peer-to-peer, or "equal to equal". It is built not only on money but also on a high degree of emotional satisfaction and, in the case of small stores like Hassett Ace Hardware, on building almost intimate relationships with customers. Rumor has it that giants such as Pepsi, Chevrolet and Unilever are already sniffing around the approach.

"We learned an interesting thing: before coming to the dealership for a car, young buyers look up our salespeople's pages on social networks to study their interests and find a kindred spirit. They find that person and consult with them, because they know the help will be friendly rather than managerial," says Christy Landy, marketing manager at General Motors. Even expert opinion can be the subject of a mutually beneficial exchange.

If you regularly use the Internet, you have most likely come across the terms peer-to-peer network, decentralized network, or the abbreviation P2P. All of these terms mean the same thing. If you want to know what peer-to-peer is and what it is used for, read this article.

What is P2P or peer-to-peer network?

Peer-to-peer, or P2P network for short, is a type of computer network that uses a distributed architecture. This means that all computers or devices that are part of it share workloads on the network. Computers or devices that are part of a peer-to-peer network are called peers. Each node in a peer-to-peer network, or peer, is equal to other peers. There are no privileged members, just as there is no central administrative device. Thus, the network is decentralized.

In a way, peer-to-peer networks are socialist networks in the digital world. Each participant is equal to the others, and each has the same rights and obligations as the others. Peers are both clients and servers at the same time.
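A minimal sketch of this dual role, with invented names and plain in-memory calls standing in for real network traffic:

```python
class Peer:
    """Each peer can both answer requests (server role)
    and issue them (client role)."""
    def __init__(self, name, files=None):
        self.name = name
        self.files = dict(files or {})

    def handle_request(self, filename):        # server role
        return self.files.get(filename)

    def fetch(self, other, filename):          # client role
        data = other.handle_request(filename)
        if data is not None:
            self.files[filename] = data        # now this peer serves it too
        return data

alice = Peer("alice", {"song.mp3": b"..."})
bob = Peer("bob")
bob.fetch(alice, "song.mp3")
print(bob.handle_request("song.mp3"))          # bob now serves the file
```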

In addition, each resource available in a peer-to-peer network is shared by all nodes without the participation of a central server. Shared resources in a P2P network can be:

  • Processing power
  • Disk space
  • Network bandwidth

What do P2P (peer-to-peer) networks do?

The main purpose of peer-to-peer networks is to let computers and devices share resources and work together to provide a specific service or perform a specific task. As mentioned earlier, a decentralized network is used to share all kinds of computing resources, such as processing power, network bandwidth or disk space. However, the most common use case for peer-to-peer networks is sharing files online. Peer-to-peer networks are ideal for file sharing because they allow the computers connected to them to receive and send files at the same time.

Consider the situation: you open your web browser and visit a website where you download a file. In this case, the site acts as a server and your computer acts as a client that receives the file. You can compare it to a one-way road: the downloaded file is a car that goes from point A (website) to point B (your computer).

If you download the same file over a peer-to-peer network, using a BitTorrent site as a starting point, the download works differently. The file arrives on your computer in chunks that come from many other computers on the P2P network that already have the file. At the same time, the file is also sent (uploaded) from your computer to others who request it. This situation is like a two-way road: the file is like a stream of small cars coming to your computer, but also being sent out to other users when they ask for it.
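A toy simulation of this chunked, multi-source download; the chunk size, peer names and round-robin scheduling are invented for illustration (real BitTorrent clients verify piece hashes and prefer rare pieces first):

```python
FILE = b"The quick brown fox jumps over the lazy dog"
CHUNK = 8

def split(data, size):
    return [data[i:i + size] for i in range(0, len(data), size)]

chunks = split(FILE, CHUNK)
# Three seeders, each holding every chunk; a real swarm would be sparser.
seeders = {"peer1": chunks, "peer2": chunks, "peer3": chunks}

downloaded = {}
for i in range(len(chunks)):
    source = list(seeders)[i % len(seeders)]   # spread requests across peers
    downloaded[i] = seeders[source][i]
    # As each chunk arrives, this client can immediately upload it to others.

assert b"".join(downloaded[i] for i in sorted(downloaded)) == FILE
print("file reassembled from", len(seeders), "peers")
```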

Why are peer-to-peer networks useful?

P2P networks have several features that make them useful:

  • It is difficult to take them down, that is, to put them out of operation. Even if one peer shuts down, the others continue to work and communicate. For the network to stop working, all peers would have to shut down.
  • Peer-to-peer networks are extremely scalable. New peers are easy to add because you don't need to change any configuration on a central server.
  • When it comes to file sharing, the larger the peer-to-peer network, the faster the downloads. Having the same file stored on many peers in a decentralized network means that when someone needs to download it, the file is fetched from many places at the same time.

Why do we need peer-to-peer networks? Legal use of P2P networks

Peer-to-peer networks are needed to connect computers and devices in a single network without having to configure a server. Building and maintaining a server is expensive and difficult, so people use cheaper alternatives like P2P. Here are some common examples of P2P networks in use:

  • When you connect a Windows device in your home to a HomeGroup, you create a peer-to-peer network between the computers. A HomeGroup is a small group of computers linked together to share disk space and printers. This is one of the most common uses of peer-to-peer technology. Some might object that homegroup computers cannot be peers because they are connected to a router. Keep in mind, however, that the router has nothing to do with managing the network: it does not act as a server, but simply as an interface, or link, between the local network and the Internet.
  • When you create a network between two computers, you create a peer-to-peer network.
  • Sharing large files on the Internet is often done using a P2P network architecture. For example, some online gaming platforms use the P2P network to download games between users. Blizzard Entertainment distributes Diablo III, StarCraft II and World of Warcraft using P2P. Another major publisher, Wargaming, is doing the same with its World of Tanks, World of Warships, and World of Warplanes games. Others, such as Steam or GOG, prefer not to use P2P, but maintain dedicated servers around the world.
  • Windows 10 updates are delivered both from Microsoft servers and through the P2P network.
  • Many Linux operating systems, such as Ubuntu, Linux Mint and Manjaro, are distributed via BitTorrent, which uses peer-to-peer networks.
  • Finally, blockchain technology uses decentralized peer-to-peer networks to record information in a distributed ledger on all computers in the network at the same time. (Read more in the articles "What is blockchain in simple words?" and "What is a distributed ledger?")

Peer-to-peer networks are the cheapest way to distribute content, because they use the peers' own bandwidth rather than the content creator's.

History of P2P networks

The forerunner of peer-to-peer networks is USENET, which was developed in 1979. It was a system that allowed users to read and post messages/news. It was a network similar to today's online forums, but with the difference that USENET did not rely on a central server or administrator. USENET copied the same message/news to all servers found on the network. Similarly, decentralized networks distribute and use all the resources available to them.

The next big milestone in the history of peer-to-peer networks came in 1999, when Napster was born. Napster was file-sharing software that people used to distribute and download music. The music distributed through Napster was usually copyrighted and thus illegal to distribute; however, that didn't stop people from using it.

Although it was Napster that brought P2P into the mainstream, the project ultimately failed and was shut down by the authorities because of the illegal distribution of content.

It can also be said with confidence that a new step in the development of peer-to-peer networks was the formation of the blockchain industry in 2008, along with the advent of bitcoin. The use of peer-to-peer decentralized networks is one of the three main components of blockchain technology, along with a common ledger of records and a consensus mechanism.

Currently, P2P remains one of the most popular technologies for sharing files over the Internet, both legally and illegally.

Illegal use of peer-to-peer networks

P2P is a controversial technology because it is widely used for piracy. Due to the benefits of this technology, there are many websites on the Internet that offer access to copyrighted content such as movies, music, software or games through P2P networks. While the technology itself is not illegal and has many legal uses that do not involve piracy, the way some people use P2P is illegal.

Therefore, when using a peer-to-peer network, make sure you are not engaging in piracy or other use cases that are punishable by law.
