

Event Preview | The Third Summit on Music Intelligence to Be Held in Beijing, April 25–26



The Third Summit on Music Intelligence will be held April 25–26, 2026, at the Central Conservatory of Music in Beijing. The summit will bring together leading scholars in music artificial intelligence from around the world and representative companies in music foundation models to pool expertise, broaden perspectives, and jointly explore the future of music. Focusing on frontier technical advances and industry trends, the conference aims to build a high-level platform for exchange, deepen the integration of music AI across industry, academia, research, and application, serve Beijing's development, support national strategy, and work with the world to shape a future where music and intelligence converge.

01

Conference Information

Dates: April 25–26, 2026

Venue: Central Conservatory of Music (Recital Hall)

Hosts: Chinese Association for Artificial Intelligence; Central Conservatory of Music

Organizers: CAAI Art and Artificial Intelligence Technical Committee; Department of Music AI and Music Information Technology, Central Conservatory of Music

02

Opening Ceremony

April 26, 2026 (Sun.)

09:00–11:35, Recital Hall of the Practice Building, CCOM

SOMI2026 Opening Ceremony

Host: Ma Huadong

Vice President of the Chinese Association for Artificial Intelligence; Vice Chair of the Academic Committee and Chair Professor at Beijing University of Posts and Telecommunications; National High-Level Talent Awardee

Keynote Speeches

· Guan Xiaohong
Progress on Computational Intelligence and Quantitative Cognition of Music

· Chris Chafe
Listening to Data: Creating Custom Computer Music Webapps for Music and Science through Data Sonification

· Georg Hajdu
From Concert Hall to Social Space: Recontextualizing Contemporary Music through Technology

· Li Xiaobing
Machinism: Where Is the Future of Music Conservatories?

03

Conference Schedule

April 25, 2026 (Sat.)

10:00–18:00, West Gate, CCOM

Registration

14:00–16:00, Room 701, Academic Building, CCOM

SOMI2026 Youth Forum

· Dai Congren
Musical Score Understanding Benchmark: Evaluating Large Language Models' Comprehension of Complete Musical Scores

· Qiu Zhiping
ELGAR: Expressive Cello Performance Motion Generation for Audio Rendition

· Tong Xinyi
Video Echoed in Music: Semantic, Temporal, and Rhythmic Alignment for Video-to-Music Generation

· Wu Shangda
CLaMP 3: Universal Music Information Retrieval Across Unaligned Modalities and Unseen Languages

16:00–17:00, Room 717, Academic Building, CCOM

Meeting of the Art and Artificial Intelligence Technical Committee, Chinese Association for Artificial Intelligence (Closed-door Meeting)

April 26, 2026 (Sun.)

09:00–11:35, Recital Hall of the Practice Building, CCOM

SOMI2026 Opening Ceremony, Opening Remarks, and Keynote Speeches

14:00–15:30, Recital Hall of the Practice Building, CCOM

SOMI2026 Academic Forum

· Liu Jiafeng
Large-Scale Music Generation Model with a Unified Acoustic Token Space: From Deep Representations to High-Quality Generation

· Ma Jun
Music Brain–Computer Interfaces: Concepts, Research Progress, and Application Prospects

· Lu Di
A Web-based DAW for AI-generated Music Workflow

· Kenneth Fields
Practical Applications of AI Coding Assistants for Networked Electronic Music Ensembles

· Aaron Williamon
The Future of Performance Science

· Cat Hope
Ensuring Inclusive and Sustainable AI Policy for the Music Sector: A Governance Issue

15:40–17:00, Recital Hall of the Practice Building, CCOM

SOMI2026 Industry Forum

· Xu Fan
Is Songwriting Becoming Selection Rather Than Creation?

· Gong Junmin
Pushing the Boundaries of Open-Source Music Generation

· Jiang Tao
From a Cup of Milk Tea to Agents for Music Creation and Consumption

· Liu Xiaoguang
AI-Empowered Music Education

17:00–18:00, Recital Hall of the Practice Building, CCOM

SOMI2026 Panel Discussion & Closing Ceremony

09:00–19:00, Online

SOMI2026 Electronic Music Marathon (Online)

04

Speaker Introductions

Keynote Speakers

Guan Xiaohong

Guan Xiaohong is a member of the Chinese Academy of Sciences and a Fellow of IEEE. He received his B.S. and M.S. degrees from Tsinghua University in 1982 and 1985, respectively, and his Ph.D. from the University of Connecticut in 1993. He was a senior consulting engineer with Pacific Gas and Electric from 1993 to 1995 and a visiting scientist at the Division of Engineering and Applied Science, Harvard University, from 1999 to 2000. He was with Xi'an Jiaotong University from 1985 to 1988 and has been there again since 1995, as the Cheung Kong Professor of Systems Engineering since 1999 and as Dean of the Faculty of Electronic and Information Engineering from 2008 to 2025. Since 2001 he has also been with the Center for Intelligent and Networked Systems at Tsinghua University, where he served as Head of the Department of Automation from 2003 to 2008, and he is a member of the Music AI and Information Science team at the Central Conservatory of Music. His research covers the economics and security of complex networked systems; optimization of power, energy, and manufacturing systems; cyber-physical systems; and cyberspace security, alongside work on the computational quantification of musical intelligence and music information processing. He received the State Natural Science Award (Second Class) in 2005 and 2018, the Ho Leung Ho Lee Prize for Scientific and Technological Progress in 2019, and several international academic awards. In recent years, collaborating with the Central Conservatory of Music and the Xi'an Conservatory of Music and serving as a doctoral supervisor at the former, he has explored the relationship and mutual influence between art and science, achieved significant results in the computational quantification of music, and founded the concert series "The Convergence of Art and Science."

Chris Chafe

Chris Chafe is the Duca Family Professor of Music at Stanford University and Director of the Center for Computer Research in Music and Acoustics (CCRMA). A composer, improviser, and cellist, he is internationally recognized for pioneering work at the intersection of music, computation, performance, and acoustic research. His research spans computer music, digital sound synthesis, real-time performance systems, and networked music performance; he is especially known for developing methods for ultra-low-latency musical interaction over networks, helping to shape new possibilities for distributed ensemble performance and telematic music-making, notably through CCRMA's JackTrip project, which supports live concertizing with musicians around the world. His work has also engaged auditory perception, human–computer interaction, and sound in scientific and medical contexts. As a composer and performer, he has created a wide range of works combining instrumental practice with technological experimentation, and his background as a cellist and improviser remains central to his artistic identity, keeping his work markedly live, interactive, and exploratory. He has pursued methods for digital synthesis and network music performance at IRCAM (Paris) and The Banff Centre (Alberta), and in 2019 he was International Visiting Research Scholar at the Peter Wall Institute for Advanced Studies at the University of British Columbia, Visiting Professor at the Politecnico di Torino, and Edgard Varèse Guest Professor at the Technical University of Berlin. Through decades of creative research, teaching, and institution-building, he has made lasting contributions to the global development of computer music and interdisciplinary sound studies.

Georg Hajdu

Georg Hajdu is a German composer and scholar, currently serving as Professor of Multimedia Composition at the Hochschule für Musik und Theater Hamburg (HfMT). He also leads the Ligeti Center and serves as the university’s Commissioner for Research and Transfer. His work focuses on the intersection of music, technology, and interdisciplinary artistic practice, with particular interests in multimedia composition, networked music, generative systems, and digital score environments. Hajdu studied molecular biology and composition in Cologne and later earned his PhD at the University of California, Berkeley, in close connection with CNMAT.

Li Xiaobing

Li Xiaobing is Professor and Doctoral Supervisor at the Central Conservatory of Music and Director of its Department of Music Artificial Intelligence. He is a National Leading Talent in Philosophy and Social Sciences, a recipient of the Publicity Department's "Four Batches" talent designation, an expert entitled to special government allowances, Principal Investigator of a major national social science project, a Fellow of the Chinese Association for Artificial Intelligence (CAAI) and Chair of its Art and Artificial Intelligence Technical Committee, a Council Member of the China Computer Federation (CCF) and Chair of its Computational Art Branch, and leader of a "National Huang Danian-style Faculty Team" in higher education. A Doctor of Composition, he graduated from the Composition Department of the Central Conservatory of Music, where he studied under the renowned composer Professor Wu Zuqiang, Honorary Chairman of the Chinese Musicians Association and Honorary President of the Central Conservatory of Music. His music spans almost all genres, and many of his works enjoy wide popularity and influence. He has received numerous domestic and international honors, including the Golden Bell Award, the Wenhua Grand Prize, the Wenhua Composition Award, national first prizes in opera and dance drama, and the "Five One Project" Award.

Academic Forum Speakers

Liu Jiafeng

Liu Jiafeng is an Associate Professor in the Department of Music AI and Music Information Technology at the Central Conservatory of Music, where he earned his doctorate as China's first Ph.D. in music artificial intelligence under Professor Yu Feng and Professor Sun Maosong. Trained in piano from an early age under a professor of the Sichuan Conservatory of Music, he served as principal pianist of his university's symphony orchestra during his undergraduate and master's studies. His research focuses on multi-track music generation, music audio signal processing, and multimodal music foundation models. He proposed the world's first end-to-end symphony generation model, developed the CCOM source-separation training and inference framework, and won first place in the international Sound Demixing Challenge 2023.

Ma Jun

Ma Jun is a Lecturer in the Department of Music Artificial Intelligence and Music Information Technology at the Central Conservatory of Music. He received his Ph.D. from the Institute of Neuroscience at Peking University and completed postdoctoral training in the Department of Anesthesiology at Washington University in St. Louis. He has more than 11 years of experience in the research and development of both invasive and non-invasive brain-computer interfaces. His current research focuses on music brain-computer interfaces, personalized music therapy based on neuroscience, and the neural mechanisms underlying music processing.

Lu Di

Lu Di is an Assistant Researcher in the Department of Music AI and Music Information Technology at the Central Conservatory of Music. He holds a master's degree from the Department of Information Science and Technology at the University of Tokyo. A core developer of China's first commercial singing voice synthesis software and its first automatic composition software, he has 15 years of full-cycle experience, from research to commercial deployment, in the interdisciplinary field of music and computer science, and holds multiple software copyrights and patents.

Kenneth Fields

Kenneth Fields (Ph.D.) is Professor of Media Arts at the University of the Chinese Academy of Sciences, with extensive experience in music and technology. From 2003 to 2023 he was a foreign professor at the Central Conservatory of Music in Beijing, and from 2008 to 2013 he held the Canada Research Chair in Telemedia Arts at the University of Calgary. He serves on the editorial boards of Organised Sound (Cambridge University Press) and the Electronic Music Studies Asia Network (EMSAN). He is also Co-Principal Investigator on "The Digital Score" (2021–2026), a European Research Council project investigating the nature and technology of music scores.

Aaron Williamon

Aaron Williamon is Professor of Performance Science at the Royal College of Music (RCM) where he directs the Centre for Performance Science (CPS), a partnership of the RCM and Imperial College London. Aaron joined the RCM as Research Fellow in 2000 and was appointed Senior Research Fellow in 2004 and Professor of Performance Science in 2010. His research focuses on skilled performance and applied scientific initiatives that inform music learning and teaching, as well as the impact of music and the arts on society. Aaron is the founder of the International Symposium on Performance Science, founding chief editor of Performance Science (a Frontiers journal), and the founding chair of Healthy Conservatoires, an international network constituted in 2015 to support health and wellbeing among student and professional performing artists. Aaron is a fellow of the Royal Society of Arts (FRSA) and the UK’s higher education academy, AdvanceHE (FHEA), and in 2008, he was elected an Honorary Member of the Royal College of Music (HonRCM).

Cat Hope

Cat Hope is a contemporary composer and a researcher in electronic music and digital scores, with a long-standing practice in experimental composition, non-traditional notation, and digital score research. Her scholarship and creative work emphasize performer agency, spatialized notation, and the role of new score interfaces in contemporary music-making. She has led national-level Australian arts and research projects and continues to advance cross-disciplinary work on music, technology, and cultural policy at international conferences such as TENOR and ICMC and at major arts institutions.

Industry Forum Speakers

Xu Fan

Xu Fan is currently working at Suno, where he focuses on generative AI–driven music creation products and engineering, translating core model capabilities into user-facing creative tools. As an early member of the founding team, he contributed to the design and development of the company’s web and mobile products from 0 to 1. Prior to this, he served as a Senior Software Engineer at Meta, where he worked on large-scale data systems as well as products related to Meta Reality Labs. He received his bachelor’s degree from Peking University.

Gong Junmin

Gong Junmin is a partner at ACE Studio and the creator of the ACE-Step open-source music generation model series. With a background as an algorithm engineer, he has worked at several leading technology companies and has long focused on audio and music generation. In addition to his technical work, he is an amateur composer and lyricist, and is part of the same mentorship lineage as renowned lyricist Fang Wenshan. He firmly believes that the most critical factor in AI music is not the model itself, but collaborating with people who truly understand music.

Jiang Tao

Jiang Tao received his bachelor's, master's, and doctoral degrees from Harbin Institute of Technology and has many years of experience in AI and audio algorithm R&D as well as engineering team management. He built leading Chinese music and audio algorithm teams at Kuaishou, Tencent Music, and Kunlun Wanwei, with product features powered by these algorithms serving tens of millions of users. In 2023–2024 he led the first team in China to develop a Suno-like music generation model and bring it to users as a product. At Tencent Music he pioneered core music consumption features such as multi-dimensional karaoke evaluation and premium audio quality, and helped shape two breakout virtual singers, Xiaotian and Xiaoqin. At Kuaishou he delivered China's first end-to-end AI music generation app (Xiaosen Chang), along with core creation and consumption features including original soundtrack generation, intelligent accompaniment, and music video platforms. He also has multiple entrepreneurial experiences in related fields.

Liu Xiaoguang

Liu Xiaoguang is CEO of DeepMusic. He holds B.S., M.S., and Ph.D. degrees in Chemistry from Tsinghua University and is a member of the Tsinghua Entrepreneur & Executive Club. An arranger, keyboardist, and guitarist with more than 100 original compositions and production credits, his works have accumulated hundreds of millions of streams across major platforms. He also has many years of experience in music fundamentals education.

Youth Forum Speakers

Dai Congren

Dai Congren is a first-year joint PhD student at the Central Conservatory of Music and Tsinghua University, supervised by Professor Xiaobing Li and Professor Maosong Sun. He previously received a Bachelor's degree in Software Engineering from Memorial University of Newfoundland, an MSc in Data Science from King's College London, and an MRes in Artificial Intelligence and Machine Learning from Imperial College London. He has worked in artificial intelligence and data-related roles across multiple organisations, including serving as an LLM Algorithm Engineer at Oak Deer Robotics Co., Ltd., conducting computer vision algorithm research and development at Google, working as a Data Analyst at 720 Management Ltd. in the United Kingdom and Wanlian Securities, and serving as a Full-Stack Engineer at Yuchai Co., Ltd. He has built broad interdisciplinary experience spanning both engineering and research.

Qiu Zhiping

Qiu Zhiping is a joint Ph.D. candidate at the Central Conservatory of Music and Tsinghua University, co-advised by Prof. Feng Yu and Prof. Qionghai Dai. He has long been dedicated to the intersection of music and embodied AI, covering the complete trajectory from multimodal perception and fine-grained motion generation to physical execution.

Tong Xinyi

Tong Xinyi is a joint Ph.D. candidate at the Central Conservatory of Music and Peking University, co-advised by Prof. Song-Chun Zhu and Prof. Feng Yu. Her research focuses on multimodal music generation, the AI-driven modeling of musical concepts, and the cross-modal alignment of artistic expression. She has published multiple first-author papers in international conferences and journals including AAAI (oral presentation), CVPR, and IEEE TCSS. Beyond her research, she co-authored the textbook Artificial Intelligence in Music: U-V Theory and co-lectures the general elective course "Artificial Intelligence and Music" at Peking University. Her honors include the Outstanding Achievement Award of the first International Conference on Artificial General Intelligence and first prize in the main track of the Ministry of Education's China-U.S. Young Maker Competition, and she continues to explore deep integration at the frontier of AI and the arts.

Wu Shangda

Dr. Shangda Wu is currently a research scientist at a leading domestic internet company, specializing in algorithm research for speech large language models. He obtained his Ph.D. in Music Artificial Intelligence and Information Technology from the Central Conservatory of Music in June 2025, where he was co-advised by Prof. Maosong Sun (Tsinghua University) and Prof. Feng Yu (Central Conservatory of Music). Dr. Wu holds a unique interdisciplinary background, having earned a Master of Science in Software Engineering from Sun Yat-sen University in 2021 and a Bachelor of Music in Piano Performance from the Xinghai Conservatory of Music in 2019. His research is deeply rooted in the intersection of AI and music, with significant contributions in music generation and music information retrieval (MIR). As a first or co-first author, he has published multiple papers in top-tier international conferences and journals in the fields of AI and acoustics, including ACL, NAACL, IJCAI, ICASSP, and ISMIR. His representative works include the CLaMP series of multimodal retrieval models, NotaGen, and ChatMusician. Recognized for his academic excellence, he was honored with the Best Student Paper Award at ISMIR 2023 and has been a recipient of the First-Class National Graduate Academic Scholarship (2024) and the title of Outstanding Graduate of Beijing (2025). Prior to his current role in the industry, Dr. Wu gained extensive experience as a research intern at Microsoft Research Asia (MSRA), Microsoft Azure Cloud, and ByteDance (Seed-Music). He is dedicated to advancing interdisciplinary research and aspires to push the frontiers of music AI through collaborative innovation with global experts.


Organizing Committee

Conference Chair

Yu Hongmei

Co-Chair

Dai Qionghai

Executive Chair

Li Xiaobing

Honorary Chairs

Yu Feng

Jean-Michel Jarre

Guo Yike (Academician)

Guan Xiaohong (Academician)

Program Committee (ordered by surname stroke count)

于陽 方恒健 王志鷗 孫茂松 邱志杰 吳璽宏 楊麗 欒家 錢琦

International Affairs Coordinator

Tao Qian

Working Committee (ordered by surname stroke count)

于海波 馬軍 王曉慶 王雪瑩 王文瀟 盧迪 劉家豐 孫宇明 李茜茜 張淵 張昕然 谷美蓮 周晴雯 周麟一 周昊天 趙藝璇 柴扉 高妍

Volunteers (ordered by surname stroke count)

卜禹翔 亓佳寧 王楚旖 王文楚 王茜 王紫 王鑫琛 馮子驁 丘悅欣 劉俊汝 劉毅 劉恩洋 孫靜茹 許玥 童暉 蘆樂妍 肖翔 楊佳一 楊婷絮 張博 陳菲 李小寧 李頔 李思麒 何紫怡 金戈 林向彬 林雨聲 段晨 洪若希 趙雪丹妮 海納 龔頡蕓 黃千倪 黃都 黃文杰 隋林木 梁世杰 彭晨 藍善美 魏圣普

This article was contributed by the CAAI Art and Artificial Intelligence Technical Committee.

