Cold and warm storage in large-scale data centers


The Book of Changes says: "In change there is Taiji, which generates the Two Forms (Liangyi); the Two Forms generate the Four Images (Sixiang); and the Four Images generate the Eight Trigrams (Bagua). All things in the world arise from this." The ancients used this passage to explain the origin of all things. And how do we know the passage today? Because the data was preserved and handed down. Data can be regarded as the main carrier of civilization, and storage media are its main vehicle. From knotted ropes to oracle bones and bronze tripods, and then to paper, continuous invention allowed people to pass civilization from generation to generation and accelerated human development. A counterexample shows the importance of choosing the right storage medium: India, one of the four ancient civilizations, suffered a relatively severe cultural discontinuity. Archaeological research indicates that information there was often recorded on bark and leaves, of which very little survives today; some scholars regard this as the most fundamental cause of that discontinuity.

With the Industrial Revolution and the rapid advance of science and technology, the storage media people choose have also changed dramatically. After years of development, the hard disk has become a necessity in daily life and work and plays an important role in data storage. In recent years, however, the continued popularity of social networking sites and smart mobile devices has driven an astonishing growth in the amount of data people create. According to statistics, the total data generated in China alone in 2013 exceeded 0.8 ZB (about 800 million TB). Under the pressure of such massive data, what will happen to storage media? How can enterprises transform and upgrade their traditional data centers to meet the needs of business operations? How can emerging Internet giants keep their huge volumes of data accessible? The urgent question is whether to rely on traditional means, such as endlessly expanding data-center floor space and storage capacity, or to adopt new technologies and new storage media to meet the access demands of massive data.
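The parenthetical conversion above is simple decimal-SI arithmetic: a zettabyte is 10^21 bytes and a terabyte is 10^12 bytes, so 0.8 ZB is indeed 800 million TB. A one-line sanity check:

```python
# Sanity-check the figure quoted above using decimal SI units.
ZB = 10**21  # bytes in a zettabyte
TB = 10**12  # bytes in a terabyte

total_bytes = 0.8 * ZB
print(total_bytes / TB)  # 800000000.0, i.e. 800 million TB
```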

Data trade-offs under big data

To solve the problems mentioned above, we should first understand the composition of today's massive data. In the past, storage was dominated by structured data. Such data follow regular patterns, so with a simple schema we can easily store them and retrieve the relevant information at any time. With the popularity of social networks and smart devices, however, anyone can now produce large amounts of chaotic, irregular data, known as unstructured data. In the Internet industry especially, the videos, text, and pictures that users create on platforms lack informative annotation. Early data centers therefore found it hard to distinguish data types and simply stored everything in one direct way, which greatly reduced the usability of the data and caused a certain amount of wasted cost.
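The "cold and warm" tiering idea behind the article's title can be sketched in a few lines: classify each object by how recently it was accessed and route it to fast flash or cheap disk accordingly. This is only an illustrative sketch; the tier names and the 7-day and 90-day thresholds are assumptions, not any vendor's actual policy.

```python
from datetime import datetime, timedelta

# Illustrative access-recency thresholds (assumptions, not a real product's policy)
WARM_THRESHOLD = timedelta(days=7)    # read within a week  -> keep on fast media
COLD_THRESHOLD = timedelta(days=90)   # untouched for 90+ days -> archive it

def classify(last_access: datetime, now: datetime) -> str:
    """Assign a storage tier based on how long ago an object was last read."""
    age = now - last_access
    if age <= WARM_THRESHOLD:
        return "warm"   # hot working set, e.g. flash/SSD
    elif age <= COLD_THRESHOLD:
        return "cool"   # ordinary mechanical disk
    else:
        return "cold"   # high-density, low-power archive storage

now = datetime(2014, 1, 1)
print(classify(datetime(2013, 12, 30), now))  # warm
print(classify(datetime(2013, 11, 1), now))   # cool
print(classify(datetime(2013, 1, 1), now))    # cold
```

In practice such a policy runs as a background job that migrates objects between tiers, so frequently read data sits on expensive fast media while the long tail of unstructured data moves to cheap, dense storage.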

In the era of big data, people have begun to re-examine the usability and importance of data. How much do structured and unstructured data each contribute to the burden of an enterprise's IT construction costs? Can improving the usability of these data gradually raise the enterprise's return on that investment? To answer these questions, we should start with the media that store the data.

Disadvantages of traditional mechanical hard disk storage

We know that the hard disk has borne the burden of data storage ever since it was invented by the "Blue Giant," IBM. In enterprise IT infrastructure, hard disks are the bricks from which the whole architecture is built. Through technological change and a series of notable acquisitions, hard disk manufacturers have entered an era of oligopoly. For example, Hitachi Global Storage Technologies, which had bought IBM's hard disk business, was in turn acquired by Western Digital and renamed HGST, giving Western Digital the leading position in hard disks; Seagate, founded by former IBM employees, ranks second in the market after acquiring the hard disk businesses of Maxtor and Samsung. The fierce competition in the hard disk market is evident: strong user demand creates a prosperous market, and the larger the share at stake, the more contenders it attracts and the fiercer the tactics become.

What has changed in the era of big data and cloud computing, with its heavy demand for hard disks? Because the capacity of a single traditional mechanical hard disk is limited, most users can only keep expanding capacity to meet the access needs of massive data. Continuous expansion inevitably increases equipment energy consumption and wastes space, leaving the enterprise in a bottomless pit of growing capacity and insufficient budget. In addition, the limited speed, performance, and reliability of traditional mechanical disks raise the time cost of accessing large volumes of data. Combined with the diversity of the unstructured data that now dominates, users are no longer satisfied with simply storing data; they need to find value in it across thousands of accesses, which places very high demands on hard disk performance.
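The scale of that "bottomless pit" is easy to estimate. The figures below are rough illustrative assumptions (a 4 TB enterprise drive, about 8 W of active power per spinning disk, 3-way replication), not measurements, but they show how quickly drive counts and power draw add up when capacity is grown by adding mechanical disks alone:

```python
# Back-of-the-envelope cost of scaling out with mechanical disks.
# All constants are illustrative assumptions, not measured values.
DRIVE_CAPACITY_TB = 4   # typical enterprise HDD of the period
DRIVE_POWER_W = 8       # rough active power draw per spinning drive

def drives_needed(total_pb: float, replication: int = 3) -> int:
    """Drives required to hold total_pb petabytes with n-way replication."""
    total_tb = int(total_pb * 1000) * replication
    return -(-total_tb // DRIVE_CAPACITY_TB)  # ceiling division

n = drives_needed(10)             # 10 PB of logical data, 3 copies each
print(n)                          # 7500 drives
print(n * DRIVE_POWER_W / 1000)   # 60.0 kW just to keep the platters spinning
```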

Choice of flash memory

Traditional mechanical hard disks cannot meet users' new needs, and their technology cannot keep pace with how quickly those needs change, so people began to look for other storage media, and flash memory was put forward. As the name suggests, flash memory is a storage medium whose great strength is access speed. Flash has been around for a long time, but people have long distrusted it: besides its high price, there is the weight of tradition. A storage medium made of small chips is genuinely hard for traditional enterprises to trust; a large spinning platter still feels comfortable and worry-free. To this day, many financial institutions still dare not use flash as the main storage medium for high-performance computing, out of concern about its instability.

In terms of performance, however, flash can certainly meet the various needs of the big data era. Its access speed for massive data is dozens of times that of a traditional mechanical hard disk, and its small, precise form factor saves a great deal of space and energy in the traditional data center, which in turn saves a great deal of money. Of course, the premise of "saving a great deal of money" is that someone gives you the flash for free: its application still cannot shake off the "rich and handsome" image of being expensive and unstable. To this day, the two hard disk giants have not developed much solid-state disk business, which may also reflect the general direction of the market's choice.

Copyright © 2011 JIN SHI