1·Although data sparseness means the social network is not always usable, a solution is presented that still makes good use of the network's useful information in such cases.
2·Using pseudowords, we can avoid the data-sparseness problem in supervised word sense disambiguation (WSD) and fully verify the performance of the word sense classifier.
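To make the pseudoword idea concrete, here is a minimal Python sketch of the usual construction: two unrelated words are conflated into one artificial ambiguous token, and the original word is kept as the gold sense label, giving the WSD classifier unlimited "sense-labeled" training data. The word pair and the tiny corpus below are made up purely for illustration.

```python
# Minimal sketch of pseudoword construction for evaluating supervised WSD.
# The word pair ("banana"/"door") and the tiny corpus are made-up examples.
from typing import List, Tuple

def make_pseudoword_corpus(sentences: List[List[str]],
                           word_a: str, word_b: str) -> List[Tuple[List[str], str]]:
    """Conflate word_a and word_b into the artificial ambiguous token
    word_a + '_' + word_b; the replaced word is kept as the gold 'sense' label."""
    pseudo = f"{word_a}_{word_b}"
    labeled = []
    for tokens in sentences:
        for i, tok in enumerate(tokens):
            if tok in (word_a, word_b):
                context = tokens[:i] + [pseudo] + tokens[i + 1:]
                labeled.append((context, tok))      # gold label = original word
    return labeled

corpus = [["the", "banana", "was", "ripe"],
          ["please", "close", "the", "door"]]
for context, sense in make_pseudoword_corpus(corpus, "banana", "door"):
    print(sense, "->", " ".join(context))
```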
3·Compared with the classical Support Vector Machine, the Least Squares Support Vector Machine loses the sparseness of the solution, which reduces the efficiency of re-learning.
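The loss of sparseness can be seen directly from the LS-SVM dual: the inequality-constrained QP of the classical SVM becomes one linear system, so essentially every training point receives a nonzero coefficient. Below is a rough numpy/scikit-learn sketch on toy data; the dataset, linear kernel, and gamma value are arbitrary illustrative choices, not taken from the cited work.

```python
# Rough sketch contrasting dual-solution sparsity: classical SVM vs LS-SVM.
# Toy data, linear kernel, and gamma = 1.0 are arbitrary illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y01 = make_classification(n_samples=100, n_features=5, random_state=0)
y = 2.0 * y01 - 1.0                      # labels in {-1, +1}

# Classical SVM: most dual coefficients are exactly zero (a sparse solution).
svc = SVC(kernel="linear", C=1.0).fit(X, y)
print("SVC support vectors:", len(svc.support_), "of", len(y))

# LS-SVM: equality constraints turn the QP into one linear system
# [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
# so essentially every alpha_i is nonzero and sparseness is lost.
gamma = 1.0
K = X @ X.T                              # linear kernel matrix
Omega = (y[:, None] * y[None, :]) * K
A = np.block([[np.zeros((1, 1)), y[None, :]],
              [y[:, None], Omega + np.eye(len(y)) / gamma]])
b_alpha = np.linalg.solve(A, np.concatenate(([0.0], np.ones(len(y)))))
alpha = b_alpha[1:]
print("LS-SVM nonzero alphas:", int(np.sum(np.abs(alpha) > 1e-8)), "of", len(y))
```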
4·Based on a rank-1 update, we propose the Sparse Bayesian Learning Algorithm (SBLA), which has low computational complexity and high sparseness and is therefore well suited to large-scale problems.
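For background, the sketch below shows the classic iterative sparse Bayesian learning (RVM-style) re-estimation on a toy regression problem. It is not the rank-1, low-complexity SBLA update described above; it only illustrates the sparsity mechanism that both share, namely that most weight precisions grow without bound and prune their basis functions. All data, caps, thresholds, and iteration counts are illustrative assumptions.

```python
# Sketch of classic sparse Bayesian learning (RVM-style) for regression.
# NOTE: this is the standard iterative re-estimation, NOT the rank-1,
# low-complexity SBLA update described above; it only illustrates how most
# weight precisions alpha_i grow without bound, leaving a sparse model.
import numpy as np

rng = np.random.default_rng(0)
N, M = 80, 30
Phi = rng.normal(size=(N, M))                  # toy design matrix
w_true = np.zeros(M)
w_true[[2, 7, 19]] = [1.5, -2.0, 0.8]          # only 3 relevant basis functions
t = Phi @ w_true + 0.1 * rng.normal(size=N)    # noisy targets

alpha = np.ones(M)                             # per-weight precisions
beta = 1.0                                     # noise precision
for _ in range(50):
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ t              # posterior mean of the weights
    g = 1.0 - alpha * np.diag(Sigma)           # "well-determined" measure
    alpha = np.minimum(g / (mu ** 2 + 1e-12), 1e8)   # re-estimate, capped
    beta = (N - g.sum()) / np.sum((t - Phi @ mu) ** 2)

kept = np.where(alpha < 1e6)[0]                # basis functions not pruned
print("retained basis functions:", kept)       # expect roughly [2, 7, 19]
```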
5·In many clustering applications, the data sets are high-dimensional, sparse, and binary-valued.
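A minimal sketch of what such data typically looks like in code: a high-dimensional, sparse, binary matrix stored in CSR format so that a clustering algorithm can run without ever densifying it. The sizes, density, and the choice of MiniBatchKMeans are arbitrary assumptions for illustration only.

```python
# Minimal sketch of high-dimensional, sparse, binary data for clustering,
# kept in CSR format so the clusterer never densifies the matrix.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.cluster import MiniBatchKMeans

X = sparse_random(1000, 10000, density=0.001, format="csr", random_state=0)
X.data[:] = 1.0                                # binarize the stored nonzeros

print("shape:", X.shape, "stored nonzeros:", X.nnz)   # ~0.1% of all entries

# scikit-learn's k-means variants accept CSR input directly.
labels = MiniBatchKMeans(n_clusters=5, n_init=3, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```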