The upper portion of the iceberg shows the size of the regular web, whereas the lower portion shows the size of the deep web.
Some of you are already aware of this, but for those who are not, this post is for you. The "deep web" is the part of the web that is not visible to us because it is hidden: the browsers we normally use do not support those sites, and their content is not indexed by standard search engines.
The deep web includes many very common uses such as web mail and online banking, but also services behind a paywall, such as video on demand, and many more.
Computer scientist Mike Bergman is credited with coining the term "deep web" in 2000 as a search-indexing term.
The first conflation of the terms "deep web" and "dark web" came about in 2009, when the deep web search terminology was discussed alongside illegal activities taking place on the Freenet darknet.
Since the term's use in media reporting on the Silk Road, many people and media outlets have taken to using "deep web" synonymously with the dark web or darknet, a comparison BrightPlanet rejects as inaccurate and which is consequently an ongoing source of confusion. Wired reporters Kim Zetter and Andy Greenberg recommend that the terms be used in distinct fashions.
These sites use a special type of address, and the addresses are changed on a regular basis, so the government cannot track them or shut them down; their addresses are temporary. You can access these types of sites by downloading the Tor Browser, which doesn't track you or let other third parties track you. You can download it from https://www.torproject.org/projects/torbrowser.html.en
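As an illustrative sketch (not from the original post): Tor hidden services use addresses ending in `.onion`, and a modern v3 onion address consists of 56 base32 characters before that suffix. The helper below is a hypothetical example showing how such an address can be recognized by its format alone; the function name and regex are my own, assuming the v3 address format.

```python
import re

# Tor v3 onion addresses are 56 base32 characters (a-z, 2-7)
# followed by the ".onion" suffix.
ONION_V3 = re.compile(r"^[a-z2-7]{56}\.onion$")

def looks_like_onion_v3(host: str) -> bool:
    """Return True if `host` matches the Tor v3 onion address format."""
    return ONION_V3.fullmatch(host.lower()) is not None

# Synthetic examples (not real services):
print(looks_like_onion_v3("a" * 56 + ".onion"))  # True: 56 base32 chars
print(looks_like_onion_v3("example.com"))        # False: ordinary domain
```

Note that this only checks the address *format*; actually reaching such a site requires routing traffic through the Tor network, which is what the Tor Browser does for you.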
Size of the deep web vs the regular web:
In the year 2000, Michael K. Bergman said that searching on the Internet can be compared to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed. Most of the web's information is buried far down on sites, and standard search engines do not find it. Traditional search engines cannot see or retrieve content in the deep web. The portion of the web that is indexed by standard search engines is known as the surface web. As of 2001, the deep web was several orders of magnitude larger than the surface web. An iceberg analogy used by Denis Shestakov represents the division between the surface web and the deep web:
It is impossible to measure, and hard to put estimates on, the size of the deep web, because the majority of the information is hidden or locked inside databases. Early estimates suggested that the deep web is 400 to 550 times larger than the surface web. However, since more information and sites are always being added, it can be assumed that the deep web is growing at a rate that cannot be quantified.
Estimates based on extrapolations from a study done at the University of California, Berkeley in 2001 speculate that the deep web consists of about 7.5 petabytes. More accurate estimates are available for the number of resources in the deep web: research by He et al. detected around 300,000 deep web sites in the entire web in 2004, and, according to Shestakov, around 14,000 deep web sites existed in the Russian part of the web in 2006.