Notices and Announcements
Important Notice

Schedule | Audience Registration Open for the Workshop on Artificial Intelligence and Geometry

With the development of big data, deep learning, and related technologies, artificial intelligence has come to profoundly shape our world. AI has deep connections to geometry: many AI models must account for the geometric distribution of data in space, or for the intrinsic geometric structure of the data itself.

The Workshop on Artificial Intelligence and Geometry is an academic exchange event funded by a National Key R&D Program project of the Ministry of Science and Technology. Its inaugural edition will be held on October 15, 2023, organized by Prof. Shaofeng Jiang (Peking University) and Prof. Hu Ding (School of Computer Science, University of Science and Technology of China). It brings together experts from theoretical algorithms, computational geometry, artificial intelligence, and other areas to give talks, and aims to serve as a platform for collaboration and exchange among researchers and students across these fields. This edition is co-organized with the CCF Technical Committee on Theoretical Computer Science of the China Computer Federation, as part of the "CCF on Campus" series.

 

Event Schedule

 

Audience Registration

↑↑ Scan the mini-program QR code above to register ↑↑

The first 20 audience members who register through the mini-program will have the opportunity to interact face-to-face with the speakers and will receive a free lunch.

Successful registrants will receive a notification email after registration closes on October 11.
*The notification email is not a campus-entry pass; off-campus attendees are responsible for arranging their own campus access.

 

Invited Guests

 

Xiaoming Sun

Professor at the Institute of Computing Technology, Chinese Academy of Sciences; recipient of the National Science Fund for Distinguished Young Scholars

 

Wei Chen

Principal Researcher at Microsoft Research Asia; IEEE Fellow

 

Jialin Zhang

Professor at the Institute of Computing Technology, Chinese Academy of Sciences

 

Talk Information

Ordered by talk time


1. 3D Structure Reconstruction and Analysis of Brain Microscopy Images

Abstract

In recent years, microscopy imaging at micrometer and nanometer resolution has enabled observation of the brain from the neuron level down to the synapse level, greatly advancing brain science. However, processing and analyzing terabyte- or even petabyte-scale brain microscopy data has become a bottleneck for neuroscience. Automatic 3D reconstruction and morphological analysis of neurons, mitochondria, and other subcellular structures in brain microscopy images using artificial intelligence has therefore become indispensable. Yet the 3D targets in these images vary widely in shape, have fine structures, and are extremely expensive to annotate. This talk will present a series of semi-supervised and self-supervised learning methods developed by our group that exploit global structural priors of neurons and mine structural correlations across image samples, achieving high-accuracy 3D reconstruction and 3D morphological representation without manual annotation.

Biography


Xuejin Chen is a tenure-track professor at the School of Information Science and Technology, University of Science and Technology of China (USTC), and a Young Changjiang Scholar of the Ministry of Education. She received her bachelor's and doctoral degrees from USTC and was a postdoctoral researcher in the Department of Computer Science at Yale University. She has been a visiting scholar at Stanford University and Microsoft Research Asia. Her research focuses on computer graphics, 3D vision, and brain microscopy image analysis. She has published over 70 papers in venues including ACM SIGGRAPH, IEEE TVCG, TMI, and ACM Multimedia, and has led more than 10 national research projects. Her honors include the "Shi Qingyun Female Scientist" Award of the China Society of Image and Graphics, the GDC 2022 Best Paper Award, a CVM 2019 Best Paper nomination, the Second Prize of the National Teaching Achievement Award, and the Special Prize of the Anhui Provincial Teaching Achievement Award.

 

2. Coresets for clustering in Euclidean spaces

Abstract

Clustering, a fundamental problem in machine learning, aims to identify a set of k centers that minimizes the overall distance between the data points and this set. A coreset, serving as a compact summary of the data, has been extensively studied for data reduction in clustering. In this presentation, I will discuss traditional techniques as well as recent advancements in constructing coresets for clustering in Euclidean spaces. These approaches leverage intriguing geometric insights and probabilistic methods to achieve improved results.
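To make the coreset notion concrete, here is a toy Python sketch (not one of the talk's constructions): uniformly sampling points and reweighting them by n/m already gives an unbiased estimate of the k-means cost for any fixed set of centers; the methods discussed in the talk use more refined sensitivity sampling and geometric arguments to obtain guarantees simultaneously for all candidate centers.

```python
import numpy as np

def kmeans_cost(X, centers, weights=None):
    # Cost = sum of (weighted) squared distances to the nearest center.
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1).min(axis=1)
    w = np.ones(len(X)) if weights is None else weights
    return float(w @ d2)

def uniform_coreset(X, m, seed=0):
    # Sample m points uniformly without replacement; weight each by n/m
    # so the weighted cost is an unbiased estimate of the full cost.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    return X[idx], np.full(m, len(X) / m)

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 2))      # toy data set
C = rng.normal(size=(3, 2))          # an arbitrary fixed set of k = 3 centers
S, w = uniform_coreset(X, m=500)
full = kmeans_cost(X, C)
approx = kmeans_cost(S, C, w)        # close to `full` in expectation
```

Uniform sampling fails when a few far-away points dominate the cost, which is exactly the situation sensitivity sampling is designed to handle.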

Biography

Lingxiao Huang is an associate professor and doctoral advisor at Nanjing University. He received both his bachelor's and doctoral degrees from the Institute for Interdisciplinary Information Sciences at Tsinghua University, held postdoctoral positions at EPFL and Yale University, and served as a senior researcher at Huawei's theoretical computer science lab in Shanghai. His research area is theoretical computer science, with interests including data compression, algorithmic fairness, and machine learning theory; he was selected for the National Young High-Level Talents program. His papers have appeared in top theoretical computer science conferences (STOC/FOCS/SODA/ICALP) and top AI conferences (ICML/NeurIPS/ICLR/IJCAI).

 

3. Diffusion and Consistency Models in Latent Spaces

Abstract

In this talk, I will give a brief introduction to diffusion models, which have achieved remarkable results in a variety of AIGC tasks. A major issue with diffusion models, however, is their sampling efficiency: the iterative sampling process requires many steps and is therefore computationally intensive. Next, I will introduce our recent work on Latent Consistency Models (LCMs), which leverages the concept of consistency models proposed by Song et al. LCMs enable efficient inference in only a few steps on pre-trained diffusion models, including Stable Diffusion, and generate images of very high quality. One can efficiently distill a latent consistency model from a pre-trained classifier-free guided diffusion model; training a high-quality 768×768 2–4-step LCM takes only 32 A100 GPU hours.

 

Furthermore, we introduce Latent Consistency Fine-tuning (LCF), a novel method that is tailored for fine-tuning LCMs on customized image datasets. Evaluation on the LAION-5B-Aesthetics dataset demonstrates that LCMs achieve state-of-the-art text-to-image generation performance with few-step inference.
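As a rough illustration of the sampling pattern behind consistency models, here is a schematic few-step sampler in Python. The names (`f_theta`, `consistency_sample`) and noise levels are illustrative assumptions, not the LCM implementation: the key idea is that a consistency function maps any noisy sample directly to a clean estimate, so a handful of denoise/re-noise rounds replaces the long iterative chain of a diffusion sampler.

```python
import numpy as np

def consistency_sample(f_theta, sigmas, shape, seed=0):
    """Few-step sampling with a consistency function f_theta(x, sigma),
    which maps a noisy sample at noise level sigma to a clean estimate."""
    rng = np.random.default_rng(seed)
    x = sigmas[0] * rng.normal(size=shape)   # start from pure noise
    x0 = f_theta(x, sigmas[0])               # one-step clean estimate
    for s in sigmas[1:]:                     # optional refinement steps
        x = x0 + s * rng.normal(size=shape)  # re-noise to a lower level s
        x0 = f_theta(x, s)                   # denoise again in one call
    return x0

# Toy stand-in for a trained model: the "data distribution" is all zeros,
# and a perfect consistency function maps every input to that clean point.
f = lambda x, s: np.zeros_like(x)
out = consistency_sample(f, sigmas=[80.0, 10.0, 2.0], shape=(4, 4))
```

With a real model, each extra sigma level trades a little compute for image quality, which is why 2–4 steps suffice in the LCM setting described above.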

Biography


Jian Li is a tenured associate professor and doctoral advisor at the Institute for Interdisciplinary Information Sciences, Tsinghua University. He received his bachelor's degree from Sun Yat-sen University, his master's degree from Fudan University, and his PhD from the University of Maryland. His research interests include algorithm design and analysis, machine learning, databases, and financial technology. He has published over 100 papers in major international conferences and journals, and his honors include the Best Paper Awards at VLDB 2009 and ESA 2010, the Best Newcomer Award at ICDT 2017, Tsinghua's "221" Basic Research Young Talents program, the Ministry of Education's New Century Talents program, and the NSFC Excellent Young Scholars Fund. He has led and participated in many research projects, including NSFC Young Scientist and General programs, a China-Israel international cooperation project, and a Young 973 program, as well as industry collaborations with Ant Financial, Huatai Securities, E Fund, Microsoft, Baidu, and Didi, among others.

 

4. When and Why Momentum Accelerates SGD?

Abstract

Momentum has become a crucial component in deep learning optimizers, necessitating a comprehensive understanding of when and why it accelerates stochastic gradient descent (SGD). To address the question of "when", we establish a meaningful comparison framework that examines the performance of SGD with momentum (SGDM) under effective learning rates, a notion unifying the influence of the momentum coefficient mu and the batch size b with the learning rate eta. For the question of "why", we find that momentum acceleration is closely related to abrupt sharpening, a sudden jump of the directional Hessian along the update direction. Specifically, the misalignment between SGD and SGDM occurs at the same moment that SGD experiences abrupt sharpening and converges more slowly. Momentum improves performance by preventing or deferring the occurrence of abrupt sharpening. Together, these findings unveil the interplay between momentum, learning rates, and batch sizes, improving our understanding of momentum acceleration.
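As a small illustration of one common reading of "effective learning rate" (the talk's precise definition, which also involves the batch size b, may differ): under heavy-ball momentum with a roughly constant gradient g, the velocity converges to g/(1 − mu), so the asymptotic step size is eta/(1 − mu) rather than eta.

```python
def sgdm_step(theta, v, grad, eta, mu):
    # Heavy-ball SGD with momentum: v <- mu * v + grad; theta <- theta - eta * v.
    v = mu * v + grad
    return theta - eta * v, v

# With a constant gradient, the velocity geometrically approaches
# g / (1 - mu), i.e. momentum amplifies the step by a factor 1 / (1 - mu).
theta, v = 0.0, 0.0
g, eta, mu = 1.0, 0.1, 0.9
for _ in range(200):
    theta, v = sgdm_step(theta, v, g, eta, mu)
# v is now very close to g / (1 - mu) = 10, so each update moves theta
# by roughly eta / (1 - mu) = 1.0 instead of eta = 0.1.
```

This is only the deterministic intuition; the abrupt-sharpening phenomenon above concerns how this amplified step interacts with the local curvature during stochastic training.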

Biography


Huishuai Zhang is a principal researcher at Microsoft Research Asia. He received his PhD from Syracuse University in 2017. His research covers deep learning theory and algorithms, privacy-preserving machine learning, and differential privacy, with dozens of papers published in venues such as ICML, NeurIPS, ICLR, JMLR, and TIT.

 

5. On the Relative Error of Random Fourier Features for Preserving Kernel Distance

Abstract

The method of random Fourier features (RFF), proposed in a seminal paper by Rahimi and Recht (NIPS'07), is a powerful technique for finding approximate low-dimensional representations of points in (high-dimensional) kernel space for shift-invariant kernels. While RFF has been analyzed under various notions of error guarantee, its ability to preserve the kernel distance with relative error is less understood. We show that for a significant range of kernels, including the well-known Laplacian kernels, RFF cannot approximate the kernel distance with small relative error using low dimensions. We complement this by showing that as long as the shift-invariant kernel is analytic, RFF with poly(ε^{-1} log n) dimensions achieves ε-relative error for the pairwise kernel distances of n points, and the dimension bound improves to poly(ε^{-1} log k) for the specific application of kernel k-means. Finally, going beyond RFF, we take a first step towards data-oblivious dimension reduction for general shift-invariant kernels, obtaining a similar poly(ε^{-1} log n) dimension bound for Laplacian kernels.
The talk is based on joint work with Shaofeng H.-C. Jiang, Luojian Wei, and Zhide Wei.
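For readers unfamiliar with RFF, here is a minimal Python sketch of the classical construction for the Gaussian kernel, a standard shift-invariant example (the talk concerns relative-error guarantees for kernel distances, which this toy demo does not certify):

```python
import numpy as np

def rff_features(X, D, sigma=1.0, seed=None):
    """Map points in R^d to R^D so that Z @ Z.T approximates the
    Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies are drawn from the kernel's Fourier transform (a Gaussian);
    # phases are drawn uniformly from [0, 2*pi).
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = rff_features(X, D=4000, sigma=1.0, seed=1)
approx = Z @ Z.T                     # approximate kernel matrix
exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
```

Each entry of `approx` is an average of D bounded random variables with the right expectation, so the additive error shrinks like 1/sqrt(D); the subtlety studied in the talk is that additive error can be arbitrarily bad in *relative* terms when two points are close in kernel distance.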

Biography


Kuan Cheng is an assistant professor in the Computer Science Department at Peking University. Previously, he was a postdoc at UT Austin. He received his PhD from Johns Hopkins University in 2019. His research interests mainly include randomness in computation and coding theory.

 

6. New Robust Multiple-Kernel Clustering Algorithms

Abstract

This talk will introduce the SimpleMKKM fusion clustering framework recently proposed by our group, along with its extensions. First, departing from the commonly used min-min/max-max clustering formulations, we propose a brand-new min-max model and design a new solving algorithm that guarantees global optimality of the obtained solution. The model demonstrates superior clustering performance across applications and contains no hyperparameters. We then extend it with the idea of localized kernel-matrix alignment, yielding the Localized SimpleMKKM algorithm. Finally, we further propose a parameter-free, sample-adaptive Localized SimpleMKKM algorithm.
The code is open-sourced at https://xinwangliu.github.io/

Biography


Xinwang Liu is a professor and doctoral advisor at the School of Computer Science, National University of Defense Technology, and a recipient of both the National Science Fund for Distinguished Young Scholars and the Excellent Young Scholars Fund. His research interests include machine learning and data mining. In the past five years he has published over 70 papers as first or corresponding author in CCF Rank-A journals and conferences, including 10 IEEE TPAMI papers (3 as sole author) and 12 ESI highly cited papers. His work has been cited over 10,000 times on Google Scholar, and he was listed among the world's top 2% scientists in 2022. He serves as an associate editor of IEEE TNNLS, IEEE TCYB, and Information Fusion, and as a senior program committee member / area chair for top conferences such as ICML and NeurIPS. Parts of his research have twice won the First Prize of the Hunan Provincial Natural Science Award (ranked 2/6 and 6/6).

 

7. The Moments of Orientation Estimation under Molecular Symmetry in Cryo-EM

Abstract
Cryogenic electron microscopy (cryo-EM) is a powerful tool in structural biology. The symmetry inherent in macromolecules is beneficial, as it allows each transmission image to correspond to multiple perspectives. However, data processing that incorporates symmetry can inadvertently average out asymmetric features. In this talk, we introduce a novel method for estimating the mean and variance of orientations under molecular symmetry. Utilizing tools from non-unique games, we show that the proposed non-convex formulation can be reformulated as a semidefinite programming problem. Moreover, we propose a rounding procedure to determine the representative values. Experimental results demonstrate that the proposed approach finds the global minima and the appropriate representatives with high probability. We release our method as an open-source Python package named pySymStat. Finally, we apply pySymStat to visualize an asymmetric feature in an icosahedral virus.

Biography


Chenglong Bao is an assistant professor at the Yau Mathematical Sciences Center, Tsinghua University. He received his PhD from the Department of Mathematics at the National University of Singapore in 2014 and was a postdoctoral researcher there from 2015 to 2018. His research focuses on models and algorithms for mathematical image processing, with over 40 papers published in leading journals and conferences.

 


Co-organizers

Ordered alphabetically by surname

 

Hu Ding
Professor, School of Computer Science, University of Science and Technology of China
Research interests: computational geometry, optimization algorithms, machine learning

 

Shaofeng Jiang
Assistant Professor, Peking University

Research interests: theoretical computer science, algorithms for big data, approximation algorithms, online algorithms

 

Host

 

Co-hosts

 

Supporting Organizations