IPFS and HTTP: Driving the Internet's Next Evolution


    The InterPlanetary File System (IPFS) is a protocol that runs on a distributed network. IPFS allows files to be stored and shared in a way that significantly improves on the widely used Hypertext Transfer Protocol (HTTP).

    --

    In this article, we examine the differences between the traditional way of linking to websites and resources over HTTP and the approach IPFS offers. The gains in functionality and utility are substantial. Through this comparison, we also shed light on the differences between operating on a distributed network such as Bitcoin, via a decentralized exchange (DEX) or a non-custodial wallet, and operating within centralized systems such as the traditional banking system.

    --

    The HTTP protocol, which paved the way for the global Internet, was developed by Tim Berners-Lee between 1989 and 1991 while he was working at the European Organization for Nuclear Research (CERN) in Switzerland. HTTP lets people reach a website or file through a static address such as https://mywebsite.com/myfile.pdf instead of the server's IP address. To draw a comparison with today's cryptocurrencies, it is like using an alias for Bitcoin instead of a long 64-character wallet address. See the image below for reference.

    [image]
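The alias idea can be sketched in a few lines. The snippet below is a minimal illustration added here, not part of the original article; it uses Python's standard `socket` module to resolve a host name to the IP address that actually identifies the server, and `localhost` is chosen so it runs without network access:

```python
import socket

# An HTTP URL such as https://mywebsite.com/myfile.pdf names a location.
# DNS translates the human-readable host name into the server's IP
# address, much as a memorable alias could stand in for a long wallet
# address. "localhost" is resolved here so the example needs no network.
host = "localhost"
ip = socket.gethostbyname(host)
print(f"{host} -> {ip}")  # -> 127.0.0.1
```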

    --

    Although HTTP has been the catalyst that drove the Internet's global expansion, it does have its shortcomings.


    The Problem with Today's Web: Centralization

    --
    Let's illustrate with a common scenario: a website you are on links to a PDF document. In most cases, that file sits on a single web server. When you, or any other user, click the link, you are directed to that one file.

    --

    The end user has no way of knowing whether the file has been tampered with or modified. The file can also become unreachable if it is moved or deleted, something everyone has experienced at some point in the form of a "404" error. On top of that, the host can decide to shut the server down or restrict access to it. In short, you have a centralized single point of failure.
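That failure mode is easy to demonstrate. The sketch below is an illustration added here, using only the Python standard library: it starts a throwaway local HTTP server and requests a path that does not exist, reproducing the familiar "404" of a dead location-based link:

```python
import http.server
import threading
import urllib.error
import urllib.request

# Serve the current directory on a random free port, in the background.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The link points at one server and one path; if the file was moved or
# deleted, the location-based address is simply dead.
url = f"http://127.0.0.1:{server.server_port}/moved-or-deleted.pdf"
status = None
try:
    urllib.request.urlopen(url)
except urllib.error.HTTPError as err:
    status = err.code

server.shutdown()
print(status)  # 404
```

The server itself is still up and reachable; only the file is gone, yet the link is worthless, which is exactly the linkrot problem described below.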

    --

    In June 2014, a paper was published that offered a different perspective. The study "explored the pervasiveness of linkrot in academic and legal citations" and, in the process, found:

    • "...more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information."

    --

    While this example concerns the United States, you can imagine the problems that arise worldwide as websites and domains are updated, modified, or deleted. It is worth highlighting that an article in The Washington Post put "the average lifespan of a webpage today" at 100 days.

    --

    Centralized networks and systems, such as banks and data centers, necessarily introduce risk through trusting, depending on, and authorizing intermediaries. They are vulnerable to surveillance, hacking, and regulation. That is why large companies that can afford it, like Facebook and Google, try to mitigate some of these risks by building their own data centers around the world. As TechCrunch put it when discussing the Internet in its current form:

    • "...a slow, expensive internet made even more costly by predatory last-mile carriers (in the U.S., at least)... It is not just slow and expensive, it is unreliable. If one link in an HTTP transfer breaks for any reason, the whole transfer breaks..."

    Introducing IPFS: How a Distributed Network Can Decentralize the Internet Again


    --
    IPFS changes how we interact with websites and online resources such as files and media by removing the need for data centers and centralized servers. Instead of the traditional centralized, location-based URL format (mywebsite.com/myfile.pdf), you search for and access content by its unique fingerprint (for example, QmdWhyocdN9kNVnH37qjuyLDjBcRX5W45guqsJxDFg6TLv).
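Content addressing can be sketched as follows. This is a simplified illustration, not IPFS's actual implementation: real CIDs like the Qm... string above are built from a multihash plus a base encoding, while plain SHA-256 stands in here:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Derive an address from the content itself, not from its location."""
    return hashlib.sha256(data).hexdigest()

doc = b"IPFS addresses content by what it is, not where it is."
addr = fingerprint(doc)
print(addr)

# Identical bytes always yield the identical address, no matter which
# node stores them; this is what makes deduplication possible, since the
# network never needs two copies of the same content under two names.
assert fingerprint(doc) == addr
assert fingerprint(doc + b"!") != addr
```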

    --

    The IPFS protocol lets you fetch content from the nodes physically closest to you. Doing so saves bandwidth and improves download speeds, while also eliminating duplication. It is a fully distributed network, similar to Bitcoin and many other decentralized cryptocurrencies. TechCrunch comments:

    • "We use content addressing so content can be decoupled from origin servers, and instead can be stored permanently. This means content can be stored and served very close to the user, perhaps even from a computer in the same room. Content addressing also allows us to verify the data, because other hosts may be untrusted."
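The verification point in the quote can be illustrated in the same simplified model (SHA-256 again standing in for IPFS's real multihash-based CIDs): because the address is the fingerprint, a client can check data received from an untrusted peer simply by rehashing it:

```python
import hashlib

def verify(expected_addr: str, data: bytes) -> bool:
    """Accept data from an untrusted host only if it matches its address."""
    return hashlib.sha256(data).hexdigest() == expected_addr

original = b"the file the author actually published"
addr = hashlib.sha256(original).hexdigest()  # the address we requested

print(verify(addr, original))            # True: bytes match the address
print(verify(addr, b"a tampered copy"))  # False: tampering is detected
```

Under HTTP, by contrast, you trust whatever the server at the URL chooses to return; there is nothing in the address itself to check the bytes against.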

    --

    The difference between IPFS and HTTP compares directly to the difference between fiat currencies (dollars, euros) and cryptocurrencies such as Bitcoin. With Bitcoin, as long as you hold your private key and have an Internet connection, you can access your tokens from any non-custodial wallet, anywhere on Earth. The network grants access to an account through the individual's private key. With fiat, you depend on a bank to give you access. If you lose your bank card, you must wait for the bank to restore your access by sending you another one. Moreover, banks are centralized and can be hacked directly or controlled through regulation. To stop the Bitcoin network, you would essentially have to shut down the Internet.

    --

    Centralized, closed systems hold back progress and innovation. They introduce risks that do not exist in decentralized, open systems. Distributed networks such as IPFS and Bitcoin let us overcome those risks without needing intermediaries any longer. Embracing this kind of innovation comes naturally, because it matches our collective approach to organizing and exchanging. The damage centralization does to our society, to the way we connect with one another, outweighs whatever benefits it may bring. Why should we keep trusting a system that fails us again and again when we no longer need it, and when we can opt out?




Email:filapp@protonmail.com

Twitter:

https://twitter.com/RalapXStartUp

Telegram:

https://t.me/bigdog403

Global Filecoin Chinese Enthusiasts Website
