#979 Add pca lowrank

Merged
laich merged 2 commits from Add_pca_master into master 2 months ago
laich commented 3 months ago
hanjr commented 3 months ago
Collaborator
```
def get_floating_dtype(A):
+    """Return the floating point dtype of tensor A.
+
+    Integer types map to float32.
+    """
+    dtype = A.dtype
+    if dtype in (float16, float32, float64):
+        return dtype
+    return float32
```

The first check does not need to test for float32: the fallback already returns float32, so only float16 and float64 need an explicit branch.

The newly added svd_lowrank, get_approximate_basis, and pca_lowrank need test cases that compare against torch. Feeding the same tensor to both implementations is enough; there is no need to load the MNIST dataset in CI.
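For reference, a minimal sketch of the simplification being suggested (assuming torch-style dtype constants; this is not the PR's final code):

```
import torch

def get_floating_dtype(A):
    """Return the floating point dtype of tensor A.

    Integer types map to float32.
    """
    dtype = A.dtype
    # float32 falls through to the default return, so only float16
    # and float64 need the explicit membership test.
    if dtype in (torch.float16, torch.float64):
        return dtype
    return torch.float32
```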
hanjr commented 3 months ago
Collaborator
```
for i, (x, y) in enumerate(data_train_loader):
+    x = torch.squeeze(x)
+
+    print(x.shape)
+    # pca
+    v3 = []
+    for i in range(len(x)):
+        v3.append(torch.pca_lowrank(x[i], q=3)[1].numpy())  # 3dim
+    v2 = []
+    for i in range(len(x)):
+        v2.append(torch.pca_lowrank(x[i], q=2)[1].numpy())  # 2dim
+    print(v2)
+
+    ## Uncomment these lines when running offline
+    # show(v2, y)
+    # show3d(v3, y)
+    break
```

Is this loop really necessary in the test? Running it once should be enough, and the test needs a comparison against torch's output.
laich commented 3 months ago
Poster
> ```
> for i, (x, y) in enumerate(data_train_loader):
> +    x = torch.squeeze(x)
> +
> +    print(x.shape)
> +    # pca
> +    v3 = []
> +    for i in range(len(x)):
> +        v3.append(torch.pca_lowrank(x[i], q=3)[1].numpy())  # 3dim
> +    v2 = []
> +    for i in range(len(x)):
> +        v2.append(torch.pca_lowrank(x[i], q=2)[1].numpy())  # 2dim
> +    print(v2)
> +
> +    ## Uncomment these lines when running offline
> +    # show(v2, y)
> +    # show3d(v3, y)
> +    break
> ```
>
> Is this loop really necessary in the test? Running it once should be enough, and the test needs a comparison against torch's output.

It runs once and then breaks. The comparison against torch can only be aligned at the integer level; the scatter plots confirm the PCA functionality is correct.
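One way to make the comparison stricter than integer-level alignment or eyeballing scatter plots is to compare only sign-invariant quantities: the singular values, and the rank-q reconstruction. A hedged sketch, assuming the new pca_lowrank mirrors torch's signature (`pca_lowrank_impl` is a placeholder for this repo's function, and the tolerance is a guess):

```
import torch

def assert_pca_close(pca_lowrank_impl, A, q, atol=1e-3):
    """Compare a pca_lowrank implementation against torch's on the same
    input, using sign-invariant quantities only."""
    # center=False keeps the input exactly low rank, so the randomized
    # approximation is near-exact and a tight tolerance is safe.
    U1, S1, V1 = pca_lowrank_impl(A, q=q, center=False)
    U2, S2, V2 = torch.pca_lowrank(A, q=q, center=False)
    # Singular values are unique up to ordering and carry no sign
    # ambiguity, so they can be compared directly.
    assert torch.allclose(S1, S2, atol=atol)
    # The rank-q reconstruction is invariant to per-column sign flips
    # of U and V, so it compares the recovered subspaces themselves.
    assert torch.allclose(U1 @ torch.diag(S1) @ V1.T,
                          U2 @ torch.diag(S2) @ V2.T, atol=atol)

# An exactly rank-3 input with q=3 keeps the randomized SVD near-exact.
torch.manual_seed(0)
A = torch.randn(50, 3) @ torch.randn(3, 40)
assert_pca_close(torch.pca_lowrank, A, q=3)  # self-check; swap in this repo's pca_lowrank
```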
laich commented 3 months ago
Poster
> ```
> def get_floating_dtype(A):
> +    """Return the floating point dtype of tensor A.
> +
> +    Integer types map to float32.
> +    """
> +    dtype = A.dtype
> +    if dtype in (float16, float32, float64):
> +        return dtype
> +    return float32
> ```
>
> The first check does not need to test for float32: the fallback already returns float32, so only float16 and float64 need an explicit branch.
>
> The newly added svd_lowrank, get_approximate_basis, and pca_lowrank need test cases that compare against torch. Feeding the same tensor to both implementations is enough; there is no need to load the MNIST dataset in CI.

Dropping float32 from the check does not change the behavior. The pca_lowrank test can cover the other functions. MNIST is loaded so the results can be plotted to verify the functionality visually.
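As a complement, the basis helper can be property-checked directly without MNIST: its output should have orthonormal columns and capture the range of a low-rank input. A sketch under the assumption that the repo's get_approximate_basis matches torch's signature (the torch._lowrank import is just a stand-in for this repo's export):

```
import torch
from torch._lowrank import get_approximate_basis  # stand-in: use this repo's function instead

def check_basis(A, q, atol=1e-3):
    """Property checks for get_approximate_basis: Q must have
    orthonormal columns and capture the range of A."""
    Q = get_approximate_basis(A, q)
    # Orthonormal columns: Q^T Q should be the q x q identity.
    assert torch.allclose(Q.T @ Q, torch.eye(q, dtype=Q.dtype), atol=atol)
    # Range capture: projecting A onto span(Q) should recover A when
    # A is (numerically) low rank.
    assert torch.allclose(Q @ (Q.T @ A), A, atol=atol)

# Exactly rank-4 input so q=4 can capture the full range.
torch.manual_seed(0)
A = torch.randn(30, 4) @ torch.randn(4, 20)
check_basis(A, q=4)
```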
laich merged commit 6fa60d4a88 into master 2 months ago
The pull request has been merged as 6fa60d4a88.