The Pirate Bay, the World's Largest Piracy Resource Site, Goes Live on IPFS!



    The Pirate Bay (abbreviated TPB) is a website dedicated to storing, categorizing, and searching BitTorrent (.torrent) files, and bills itself as "the world's largest BitTorrent tracker." Alongside freely licensed material, the torrents it offers include plenty of audio, video, software, and video games whose authors claim copyright, making it one of the key sites for online sharing and downloading.


    The BitTorrent protocol itself is decentralized, but the ecosystem around it has weak points. Torrent sites that rely on centralized search engines, for example, are prone to outages and takedowns. Torrent-Paradise tackles this problem with IPFS: it is a torrent search engine served by the very people who use it.

    --

    IPFS is short for the InterPlanetary File System, which has been around for a few years. While the name sounds unfamiliar to most people, it is seeing growing use among the technically inclined.

    --

    In short, IPFS is a decentralized network in which users serve files to one another. A website that uses IPFS is served by a crowd of peers, much like BitTorrent users sharing a file. The advantage is that the site can be fully decentralized: a website or other resource hosted on IPFS stays accessible as long as a single user's computer that has "pinned" it remains online.
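    The idea behind pinning can be illustrated with a toy sketch of content addressing. This is a simplification: real IPFS wraps hashes in multihash-encoded CIDs rather than raw SHA-256 hex, but the principle is the same.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Toy content address: a hash of the bytes themselves.
    The address is derived from the content, not from a server
    location, so any peer pinning the exact bytes can serve them."""
    return hashlib.sha256(data).hexdigest()

page = b"<html>my decentralized site</html>"
addr = content_address(page)

# Every fetcher can verify integrity by re-hashing what it received.
assert content_address(page) == addr

# Change one byte and the address changes: a tampered copy can
# never masquerade as the original.
assert content_address(b"<html>MY decentralized site</html>") != addr
```

    Because the address is bound to the bytes, "hosting" reduces to holding a copy; that is why a site stays up as long as one pinner stays online.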

    --
    The advantages of IPFS are clear. It lets archivists, content creators, researchers, and many others distribute large volumes of data over the Internet. It is censorship resistant and not vulnerable to ordinary hosting outages. It is also a perfect match for "pirate" sites: its decentralized nature makes an IPFS site nearly impossible to shut down. Pirate Bay co-founder Peter Sunde highlighted this as early as 2016.

    "IPFS is great, and it would be wonderful if everyone started using it," Sunde said. "It's perfect for reducing centralization. The problem is that big sites like TPB and KAT aren't very good at adopting new technology."

    Not long after Sunde's comments, KAT was shut down, while The Pirate Bay remains online, though with more downtime than ever. Even so, no major pirate site has shown any interest in IPFS thus far.

    --

    Others, however, have taken up the challenge. A developer who goes by Urban Guacamole recently launched Torrent-Paradise, a torrent search engine powered by IPFS.

    "I feel that decentralizing search is the next step in the evolution of the torrent ecosystem," he told TF. "File sharing keeps moving toward ever greater decentralization, eliminating single points of failure one after another."

    To bootstrap the site, Torrent-Paradise used a copy of The Pirate Bay's database, converted into a searchable index with the help of ipfsearch.xyz. The site's operator also runs a DHT crawler, which currently adds roughly 20,000 new torrents every day.

    --

    This all sounds promising, but there are some drawbacks:

    One major hurdle is that anyone who wants to act as a node must install and configure IPFS. The process is relatively simple, but the setup requires the command line, which the average web user may find unfamiliar. IPFS gateways are available, however; Cloudflare recently launched one, for example. These let anyone reach sites like Torrent-Paradise through a regular URL, although gateway visitors do not help share the site.
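    Gateway access amounts to mapping a content ID onto an ordinary URL. A minimal sketch (the CID below is a made-up placeholder, not Torrent-Paradise's real hash):

```python
# Public HTTP gateways expose IPFS content at <gateway>/ipfs/<cid>,
# so a stock browser can fetch it without running an IPFS node.
GATEWAYS = [
    "https://cloudflare-ipfs.com",  # Cloudflare's gateway
    "https://ipfs.io",              # the reference gateway
]

def gateway_urls(cid: str) -> list:
    # Standard gateway path layout: <gateway>/ipfs/<cid>
    return [f"{gw}/ipfs/{cid}" for gw in GATEWAYS]

fake_cid = "QmPlaceholderPlaceholderPlaceholder"  # dummy value
for url in gateway_urls(fake_cid):
    print(url)
```

    The trade-off is exactly the one described above: a gateway visitor consumes the content through a central intermediary, while only full nodes that pin the data help re-serve it.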

    Another drawback is that the static index the site relies on is updated only once a day. That is a practical rather than a technical limitation; in theory, it could be updated in near real time.

    --
    (Image: torrent search on The Pirate Bay site)

    --

    For now there is a regular Torrent-Paradise website anyone can visit, as well as an ad-free IPFS version. The site itself is fairly basic, but its real point is to demonstrate the power of decentralization. The decentralization of file sharing has been under way for decades: the BitTorrent protocol is decentralized by design, and The Pirate Bay pushed this further by dropping its tracker and .torrent files in favor of DHT and magnet links.
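    The magnet links mentioned above pack everything a client needs into a URI: an info-hash that peers can look up directly in the DHT, with no tracker or .torrent file required. A minimal sketch (the info-hash below is a dummy value, not a real torrent):

```python
from urllib.parse import quote

def magnet_link(info_hash, name, trackers=()):
    """Build a BitTorrent magnet URI.
    xt = exact topic: urn:btih:<info-hash> lets clients find the
         swarm purely via the DHT.
    dn = display name; tr = optional tracker URLs."""
    uri = f"magnet:?xt=urn:btih:{info_hash}&dn={quote(name)}"
    for tr in trackers:
        uri += f"&tr={quote(tr, safe='')}"
    return uri

# Dummy 40-hex-character info-hash, for illustration only.
link = magnet_link("0" * 40, "example file")
print(link)
```

    Replacing downloadable .torrent files with these one-line URIs is what let The Pirate Bay shrink its hosting footprint while the DHT took over peer discovery.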

    --

    "Decentralizing torrent search is the next step," says Urban Guacamole, who believes IPFS may become far more common among torrent sites in the future.

    --

    Torrent-Paradise's operator sees "availability" as one of its main strengths, which in this context is closely tied to censorship resistance.

    "Because every update of Torrent Paradise is an IPFS hash, it is impossible for anyone, including me, to take the site down. As long as someone pins it (the IPFS equivalent of seeding), the site remains available."

    Since the site started out as a Pirate Bay copy, complaints from rights holders may eventually follow. The site will comply with DMCA notices, but it has no control over hashes already shared across the network. For now, Urban Guacamole plans to keep working on the site. With a free domain and Cloudflare's support it costs only about $4 a month, so money is not a factor.

    Perhaps something for The Pirate Bay to consider?

    "It would certainly help them keep the site available when their servers go down," Urban Guacamole says.

    --

    (Note: add WeChat ID ipfsxiaomishu to get the "Pirate Bay" torrent search site.)


    【Previous Articles】

    - IPFS Latest Updates #25

    - Commercial Hosting Services on IPFS

    - The Five Most Anticipated Blockchain Projects of 2019


    Creative short video: A Mining Rig's Journey (Restaurant Episode)


    Scan the QR code to join the IPFS community



Email:filapp@protonmail.com

Twitter:

https://twitter.com/RalapXStartUp

Telegram:

https://t.me/bigdog403

The global community site for Chinese Filecoin enthusiasts
