Introduction

The technology industry and financial markets constantly ask: “Is an AI bubble about to burst?” This concern typically stems from historical vigilance toward market frenzy and excessive investment: from the 2000 dot-com bubble to the 2008 financial crisis, investors have remained highly sensitive to “the next crash.” However, as this article argues, the essential risk of AI is not the “burst” of a bubble caused by waning enthusiasm, but rather a structural transition as the technological center of gravity shifts from one paradigm to another.

In the AI field, the focal point of this transition is moving from “pursuing model scale” toward “pursuing efficiency, control, and personalization.” Companies and developers who focus solely on scale competition risk being left behind by this paradigm shift. This article explores the ongoing shift from the perspectives of human needs, historical analogy, and computational architecture, and attempts to provide a theoretical framework for understanding the future direction of AI development.

I. AI Is Not a Demand Bubble, But an Inevitable Resource Reallocation

1.1 Theoretical Context: From Human Needs to Economic Growth

From historical and psychological perspectives, the driving force behind the AI wave has never been mere market hype, but rather a profound response to intrinsic human needs.

Psychologist Abraham H. Maslow’s “hierarchy of needs,” presented in his 1943 paper “A Theory of Human Motivation” in Psychological Review, provides a framework for understanding technological demand. AI technology initially satisfied the requirements of enterprises and research institutions for efficiency and security (lower-level needs), but is now gradually ascending to the level of self-actualization: achieving more efficient creation, expression, and decision-making through personalized intelligent agents.

Perspective | 10 Years Ago | Now
Need Satisfaction | Need AI to become stronger | Need AI to become more efficient, safer, and more personalized
Main Driver | | Responding to higher-level personalization and autonomy needs
Main Contradiction | Insufficient model capabilities | Energy, cost, and diminishing marginal returns on computing power
Risk Source | Models not good enough; insufficient application scenarios | Models too large; costs too high; efficiency mismatched with needs; privacy and control rights

This evolution of the hierarchy of needs demonstrates that AI development has never been self-driven by technology itself, but rather an inevitable product of the evolution of human societal need structures. When AI advances from satisfying corporate efficiency needs to satisfying individual autonomy and expression needs, its technological form must adjust accordingly.

1.2 Historical Insights: The Collapse of Complex Systems and Resource Reallocation Risk

This pattern, in which efficiency gains fail to offset operating costs as a system grows, resembles the argument made by archaeologist Joseph A. Tainter in his 1988 book The Collapse of Complex Societies. Tainter treats societies as problem-solving organizations that address a continuous flow of problems by adding complexity, and argues that this strategy faces diminishing marginal returns: each addition of complexity carries energy costs to create and maintain, and because societies solve the easiest problems first, each further increase in complexity delivers less problem-solving benefit while demanding more energy, until operating costs exceed the benefits provided.

Mapping this theory to the AI field:

  • Complexity: the over-dependence of centralized, super-large models (LLMs) on computing power, energy, and data centers.
  • Diminishing marginal returns: the resources consumed in training ever-larger models grow exponentially, while the magnitude of performance improvement gradually slows.
  • Reallocation risk: when efficiency bottlenecks appear in centralized architectures, the technological center of gravity will be forced to migrate toward more efficient, more distributed architectures (such as edge AI).

This phenomenon has already begun to manifest in the AI field:

  • Explosive growth in energy costs: training a model on the scale of GPT-3 is estimated to cost millions of dollars in compute and electricity, and as model scale expands this cost grows super-linearly.
  • Marginal slowdown in performance improvement: from GPT-3 to GPT-4, estimated model scale grew roughly tenfold, yet gains on many benchmarks were on the order of 20-30%.
  • Persistent pressure on inference costs: although the cost per API call is declining, large-scale applications still face enormous financial pressure.
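The diminishing-returns dynamic above can be made concrete with a toy scaling curve. The sketch below is purely illustrative: the power-law form echoes published neural scaling laws, but the constants and compute budgets are invented for this sketch, not measurements of any real model family.

```python
# Illustrative only: a hypothetical power-law scaling curve, loss = a * C**(-b),
# in the spirit of published neural scaling laws. The constants a and b and the
# compute budgets below are invented for this sketch, not measured values.

def loss_at_compute(compute: float, a: float = 10.0, b: float = 0.1) -> float:
    """Hypothetical validation loss as a function of training compute."""
    return a * compute ** -b

budgets = [10 ** k for k in range(1, 6)]          # each step is 10x more compute
losses = [loss_at_compute(c) for c in budgets]
gains = [prev - cur for prev, cur in zip(losses, losses[1:])]

# Every 10x increase in compute buys a smaller absolute improvement than the
# previous one: diminishing marginal returns on scale.
assert all(g1 > g2 > 0 for g1, g2 in zip(gains, gains[1:]))
```

Under any curve of this shape, each successive order of magnitude of compute is spent buying a smaller improvement, which is exactly Tainter's cost-benefit squeeze restated for model training.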

Therefore, the essential risk of AI is not a bubble bursting, but the risk of resources and power migrating from “centralized” to “decentralized” architectures. Companies that over-invest in centralized large-model infrastructure may find those investments becoming inefficient, or even unsustainable, under the new paradigm.

II. IBM → PC: A Paradigm Shift Driven by Humanity’s Desire for Control and Freedom

2.1 Historical Context: From Concentrated Power to Information Democratization

The current centralized structure of the AI industry is strikingly similar to the historical trajectory of the computer industry’s transformation from mainframes to personal computers (PCs) in the 1980s.

In the 1960s and 1970s, companies represented by IBM dominated the computer market. Mainframes were the center of computing, with control highly concentrated in corporate machine rooms and among a few IT experts. Historians exploring IBM and the personal computer revolution point out that the PC’s success was not purely technological progress, but more importantly, users’ deep desire for autonomy, privacy, and control.

Era | Primary Computing Location | Control | Representative Companies | Paradigm Significance
Mainframe Era | Corporate machine rooms | Centrally concentrated | IBM | Concentration of computing and power
PC Era | Personal terminals | Individual autonomy | Microsoft / Intel | Information democratization; awakening of personal sovereignty
Today’s AI | Cloud centers | Enterprise model control | OpenAI, Google | AI still in its centralization phase
Future AI | Edge AI / on-device AI | Personally held models | No true dominant player yet | AI’s “personal sovereignization”

2.2 Three Major Drivers from Centralization to Decentralization

Today, large language models (LLMs) sit at the “IBM moment” of that era: users must stay connected, depend on the cloud, and hand their data over to platforms. This model of “centralized computing power → centralized data → centralized power” runs counter to humanity’s inherent need to control one’s own data and privacy.

As technology advances, this centralized model will be broken, with three main drivers:

1. Maturation of Model Compression Technologies

  • Model Distillation: Transferring knowledge from large models to small models, retaining most performance while dramatically reducing parameters.
  • Quantization: Compressing model weights from 32-bit floating point to 8-bit or even 4-bit integers, significantly reducing memory requirements and computational costs.
  • Parameter-Efficient Fine-Tuning (such as LoRA, QLoRA): Adjusting only a small portion of model parameters, making personalized adjustments feasible.
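Of these techniques, quantization is the easiest to see in a few lines. The following is a minimal sketch assuming simple symmetric per-tensor absmax scaling; production toolchains (per-channel scales, calibration data, outlier handling) are considerably more sophisticated. It shows the 4x storage saving and the bounded rounding error of going from 32-bit floats to 8-bit integers.

```python
import numpy as np

# Minimal sketch of symmetric per-tensor int8 quantization with an absmax
# scale. Illustrative only; real quantization libraries do much more.

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights to int8 plus a scale factor for dequantization."""
    scale = float(np.abs(w).max()) / 127.0        # map the largest weight to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in for a weight matrix
q, scale = quantize_int8(w)

assert q.nbytes == w.nbytes // 4                  # 1 byte per weight instead of 4
max_err = float(np.abs(dequantize(q, scale) - w).max())
assert max_err <= scale / 2 + 1e-6                # rounding error stays bounded
```

The same trade-off, memory for a small bounded error, is what makes 4-bit schemes and on-device inference practical.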

2. Proliferation of Edge AI Accelerators

  • Apple Neural Engine: Dedicated AI processors integrated into iPhones and Macs.
  • Qualcomm AI Hub: AI computing platform optimized for mobile devices.
  • Dedicated Neural Processing Units (NPUs): More and more consumer-grade devices equipped with specialized AI hardware.

3. Rise of Privacy and Sovereignty Architectures

  • Local-first AI: Design philosophy prioritizing local data processing.
  • On-device LLM: Language models running entirely on personal devices, such as small versions of LLaMA and Mistral.
  • Federated Learning: Distributed machine learning while protecting privacy.
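The server-side core of federated learning fits in a few lines. Below is a minimal, hypothetical federated-averaging (FedAvg-style) step: each client trains locally and sends back only model weights, which the server averages in proportion to local sample counts, so raw data never leaves the device. The clients and numbers are toys; real systems add client sampling, many rounds, and secure aggregation.

```python
import numpy as np

# Minimal sketch of the aggregation step of federated averaging (FedAvg):
# combine locally trained weights, weighted by each client's sample count.
# Raw training data never leaves the client devices.

def fedavg(client_weights: list[np.ndarray], client_sizes: list[int]) -> np.ndarray:
    """Sample-count-weighted average of per-client weight vectors."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients holding different amounts of local data.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]

global_w = fedavg(weights, sizes)
# 0.1*1 + 0.3*3 + 0.6*5 = 4.0 and 0.1*2 + 0.3*4 + 0.6*6 = 5.0
assert np.allclose(global_w, [4.0, 5.0])
```

The design choice worth noting is that only weights, not data, cross the network, which is precisely the privacy property that makes the decentralized architecture attractive.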

The mainstream of future AI may no longer be larger, more centralized cloud models, but rather “intelligent agents” that are more personal, more distributed, and more trustworthy.

III. 2D → 3D Design Tools Migration: The Inevitability of Computing Power Devolution

3.1 Historical Trajectory of Design Tool Evolution

The evolution from 2D design tools to 3D CAD and animation tools likewise confirms the inevitability that software will ultimately flow from servers to personal terminals. The core driver of this migration is the shortening of the distance between computing power and its users.

Period | Operating Platform | Representative Software & Ecosystem | Audience
Early 1990s | Workstation servers | Alias PowerAnimator (Maya’s predecessor), early 3D Studio | Enterprises and research institutions (extremely concentrated)
Mid-2000s | High-end PCs | 3DS Max, mainstream versions of Maya | Studios and professional creators
After 2010 | Personal PCs / laptops | Blender, Unity, Unreal | Citizen creators (computing-power democratization)
After 2023 | Edge AI + 3D computing | AI 3D generation, NeRF, Gaussian Splatting | Eventually everyone (maximized autonomous expression)

3.2 Three Stages of Computing Power Democratization

This trajectory clearly shows three stages of computing power democratization:

Stage One: Professional Monopoly Period (1990s)

  • 3D creation required workstations worth tens of thousands of dollars
  • Software licensing fees were expensive (tens of thousands of dollars per suite)
  • Only large studios and research institutions could afford it
  • Creative power highly concentrated

Stage Two: Professional Popularization Period (2000s-2010s)

  • Personal computer performance improved, mid-range PCs could run professional software
  • Rise of open-source software (like Blender) lowered cost barriers
  • Independent studios and individual creators began to emerge
  • Creative power began to disperse

Stage Three: Universal Creation Period (2020s-)

  • AI-assisted creation tools dramatically lowered skill barriers
  • Cloud rendering services provided on-demand computing power
  • Real-time 3D generation technologies (like NeRF, Gaussian Splatting) made creation more intuitive
  • Creative power completely democratized

The ultimate destination of technology is to maximize individual autonomy and expression. When computing power barriers decrease to what personal devices can handle, whether text, images, or complex 3D models, the power to create will devolve from centralized institutions to individuals. AI’s future is to become everyone’s personal co-pilot for digital creation, not a centralized headquarters far away in the cloud.

IV. Conclusion: AI’s Future Key Is Not Scale, But Successful Paradigm Transition

AI’s future hinges on a comprehensive transition in intelligence structure and computational architecture.

Paradigm | Past 10 Years (Early Centralization) | Next 10 Years (Personalization)
Focus | Bigger is better | Better is smaller, personal, trusted
Goal | Pursue the strongest AI (AGI) | Pursue the most suitable AI (P-AGI: Personal AGI)
Architecture | Cloud-centric | Local/edge-centric
Risk | Model capabilities not strong enough; unable to find real applications | Failing to navigate the paradigm shift; missing the efficiency and market opportunities brought by decentralization

AI is not a bubble about to burst, but a technological system undergoing a painful transformation. When AI is no longer “the centralized models of a few companies” but rather “everyone’s intelligent agent” and “the guardian of personal data sovereignty,” the true technological explosion, application popularization, and social transformation will begin.

The success or failure of this paradigm shift will determine AI technology’s future development direction:

Successful Transition: Will bring a more autonomous, more efficient, more rights-respecting AI ecosystem where everyone can have their own intelligent assistant without sacrificing privacy and autonomy.

Failed Transition: May lead to resource waste, technological stagnation, and even widespread social distrust of AI, ultimately hindering AI technology’s long-term development.

History tells us that true technological revolution does not come from larger, stronger systems, but from innovations that are closer to human nature and better aligned with societal needs. Just as the PC revolution liberated computing power from machine rooms to desktops, just as the internet liberated information from libraries to fingertips, AI’s future revolution will liberate intelligence from the cloud to everyone’s side.

References

Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
(Introduced the theory of paradigm shifts, the framework used here for understanding scientific revolutions and technological change; among the most-cited academic works of the 20th century.)

Maslow, A. H. (1943). A Theory of Human Motivation. Psychological Review, 50(4), 370–396.
(Introduced the hierarchy of needs, grounding this article’s account of the human needs behind technology.)

Tainter, J. A. (1988). The Collapse of Complex Societies. Cambridge: Cambridge University Press.
(Argues that growing social complexity yields diminishing marginal returns; the basis for Section 1.2’s analogy.)

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems 30 (NIPS 2017).
(Introduced the Transformer architecture, establishing tokens and attention mechanisms at the core of modern large models; this paper transformed natural language processing and underpins GPT, BERT, and other modern language models.)

Howard, J., & Ruder, S. (2018). Universal Language Model Fine-tuning for Text Classification. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 328–339.
(ULMFiT demonstrated the high utility of transfer learning in NLP, laying practical groundwork for model miniaturization and fine-tuning; it showed that even relatively small models can achieve strong performance when properly fine-tuned.)

Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531.
(Introduced knowledge distillation, providing the theoretical and practical basis for compressing large models into small ones.)

Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2021). LoRA: Low-Rank Adaptation of Large Language Models. arXiv preprint arXiv:2106.09685.
(Introduced LoRA, making personalized fine-tuning of large language models highly efficient by updating only a small fraction of parameters.)

Dettmers, T., Pagnoni, A., Holtzman, A., & Zettlemoyer, L. (2023). QLoRA: Efficient Finetuning of Quantized LLMs. arXiv preprint arXiv:2305.14314.
(Combined quantization with LoRA to further lower the hardware requirements of fine-tuning, making billion-parameter-scale fine-tuning feasible on consumer hardware.)

Ceruzzi, P. E. (2003). A History of Modern Computing (2nd ed.). Cambridge, MA: MIT Press.
(A comprehensive history of computing, from mainframes to personal computers, providing historical perspective on the democratization of technology.)

Campbell-Kelly, M., & Aspray, W. (2004). Computer: A History of the Information Machine (2nd ed.). Boulder, CO: Westview Press.
(Examines the evolution of the computer industry, including the IBM-dominated mainframe era and the personal computer revolution; the empirical basis for this article’s historical analogy.)

McMahan, B., Moore, E., Ramage, D., Hampson, S., & Agüera y Arcas, B. (2017). Communication-Efficient Learning of Deep Networks from Decentralized Data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 1273–1282.
(Introduced federated learning, a technical framework for distributed machine learning that preserves privacy.)

Kairouz, P., McMahan, H. B., Avent, B., et al. (2021). Advances and Open Problems in Federated Learning. Foundations and Trends in Machine Learning, 14(1–2), 1–210.
(A comprehensive survey of progress and open challenges in federated learning; a roadmap for the future of decentralized AI.)

Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge Computing: Vision and Challenges. IEEE Internet of Things Journal, 3(5), 637–646.
(Surveys the vision and challenges of edge computing, the technical foundation for understanding AI’s migration from the cloud to the device.)

Zhou, Z., Chen, X., Li, E., Zeng, L., Luo, K., & Zhang, J. (2019). Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing. Proceedings of the IEEE, 107(8), 1738–1762.
(Introduced the concept of edge intelligence, arguing for the inevitability and advantages of combining AI with edge computing.)

Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707.
(An ethical framework for AI that examines its social impact and stresses the importance of individual autonomy and data rights.)

Brown, T. B., Mann, B., Ryder, N., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 1877–1901.
(The GPT-3 paper; demonstrated the capabilities of large language models while also exposing the costs and challenges of the scaling path.)

Jiang, A. Q., Sablayrolles, A., Mensch, A., et al. (2023). Mistral 7B. arXiv preprint arXiv:2310.06825.
(Mistral 7B showed that small, efficient models can match or exceed larger ones on many tasks, supporting this article’s efficiency-first argument.)

Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2024). Artificial Intelligence Index Report 2024. Stanford University.
(Annual AI Index report with data and analysis on AI development trends, investment, and ethics.)
