1. Introduction
2. Quantum measures
3. Predicting entanglement by machine learning models
Table 1. Comparison of the different machine learning models.

Model | Applicable scenarios | Advantages | Disadvantages |
---|---|---|---|
Linear regression | There is a linear relationship between the features and the target. | Simple computation; strong interpretability. | Limited ability to handle nonlinear relationships; sensitive to outliers. |
KNN | The distance metric in the feature space is meaningful. | Simple and intuitive; no training process required. | High computational cost; sensitive to feature scaling. |
Bagging | Reducing the variance of the model and improving its stability. | Reduces the risk of overfitting and improves prediction accuracy. | May not significantly improve prediction accuracy. |
Boosting | Converting weak learners into strong learners. | Improves prediction accuracy; robust to outliers and noisy data. | Sensitive to the choice of base learners. |
Stacking | There is significant diversity and complementarity among the base learners. | Leverages the complementarity of the base learners, improving prediction accuracy. | High computational complexity; sensitive to the choice of base learners. |
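
As a concrete illustration of how the models compared in Table 1 can be set up, the sketch below instantiates them with scikit-learn and xgboost. The specific estimators and hyperparameters are illustrative assumptions, not necessarily the configurations used in this work.

```python
# Minimal sketch: instantiating the models of Table 1 (assumed libraries:
# scikit-learn and xgboost; hyperparameters are illustrative only).
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import (BaggingRegressor, RandomForestRegressor,
                              AdaBoostRegressor, StackingRegressor)
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor

base_models = {
    "Linear regression": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
    "Bagging": BaggingRegressor(n_estimators=100),
    "Random forest": RandomForestRegressor(n_estimators=100),
    "AdaBoost": AdaBoostRegressor(n_estimators=100),
    "XGBoost": XGBRegressor(n_estimators=100, learning_rate=0.1),
    "NN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000),
}

# Stacking combines the predictions of several base learners through a
# final (meta) regressor; here a linear model is used as the combiner.
stacking = StackingRegressor(
    estimators=[("knn", KNeighborsRegressor()),
                ("rf", RandomForestRegressor(n_estimators=100)),
                ("xgb", XGBRegressor(n_estimators=100))],
    final_estimator=LinearRegression(),
)
```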
Table 2. The RMSEs of the different learning methods for predicting the coherent information.
Methods | RMSE |
---|---|
Linear regression | 0.020 |
KNN | 0.051 |
Bagging | 0.105 |
Random forest | 0.103 |
AdaBoost | 0.134 |
XGBoost | 0.038 |
Stacking | 0.019 |
NN | 0.159 |
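
A comparison of the kind reported in Table 2 can, in principle, be obtained with a simple hold-out evaluation as sketched below. Here `X` and `y` are random placeholders standing in for the prepared state features and their coherent-information targets, and the 80/20 split is an assumption; the same loop extends directly to the full `base_models` dictionary sketched after Table 1.

```python
# Minimal sketch: hold-out RMSE evaluation (placeholder data, assumed split).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Placeholder stand-ins: in the actual study, X holds the state features and
# y the corresponding coherent information values.
rng = np.random.default_rng(0)
X = rng.random((1000, 9))
y = rng.random(1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

models = {"Linear regression": LinearRegression(),
          "KNN": KNeighborsRegressor(n_neighbors=5)}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
    print(f"{name}: RMSE = {rmse:.3f}")
```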
Figure 1. The Pearson correlations among the input data features.
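
The analysis behind Figure 1 amounts to computing the pairwise Pearson coefficients of the feature columns. The sketch below uses pandas and matplotlib on placeholder samples with the feature names of Table 4; the data and plotting details are illustrative assumptions about how such a figure can be produced.

```python
# Minimal sketch: Pearson correlation matrix of the input features
# (placeholder samples; feature names taken from Table 4).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

feature_names = ["p_11", "p_12", "p_21", "p_22",
                 "miu_2", "miu_3", "vne_A", "vne_B", "f_c"]
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((500, len(feature_names))), columns=feature_names)

corr = df.corr(method="pearson")          # pairwise Pearson correlations

fig, ax = plt.subplots()
im = ax.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
ax.set_xticks(range(len(feature_names)), feature_names, rotation=90)
ax.set_yticks(range(len(feature_names)), feature_names)
fig.colorbar(im, ax=ax, label="Pearson correlation")
plt.tight_layout()
plt.show()
```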
3.1. Different-dimensional quantum states
Table 3. The RMSEs of the different learning methods for predicting the coherent information of quantum states of different dimensions.
Methods | RMSE ($\rho \in \mathcal{B}(\mathcal{H}_{2}\otimes \mathcal{H}_{2})$) | RMSE ($\rho \in \mathcal{B}(\mathcal{H}_{3}\otimes \mathcal{H}_{3})$) |
---|---|---|
Linear regression | 0.020 | 0.015 |
KNN | 0.051 | 0.040 |
Bagging | 0.105 | 0.074 |
Random forest | 0.108 | 0.078 |
AdaBoost | 0.122 | 0.063 |
XGBoost | 0.038 | 0.030 |
Stacking | 0.020 | 0.015 |
NN | 0.159 | 0.076 |
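
For the two cases compared in Table 3, the labelled data require the coherent information $I(A\rangle B)=S(\rho_{B})-S(\rho_{AB})$ of bipartite states $\rho\in\mathcal{B}(\mathcal{H}_{d}\otimes\mathcal{H}_{d})$ with $d=2,3$. A minimal numpy sketch is given below; sampling from the Ginibre ensemble is an illustrative choice and not necessarily the state generation used in this work.

```python
# Minimal sketch: coherent information of random bipartite states
# (Ginibre sampling is an assumed, illustrative choice).
import numpy as np

def random_density_matrix(dim, rng):
    """Random state from the Ginibre ensemble: rho = G G^dag / Tr(G G^dag)."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def von_neumann_entropy(rho):
    """S(rho) = -sum_i lambda_i log2(lambda_i) over the nonzero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def coherent_information(rho_ab, d_a, d_b):
    """I(A>B) = S(rho_B) - S(rho_AB), with rho_B the partial trace over A."""
    rho_b = np.trace(rho_ab.reshape(d_a, d_b, d_a, d_b), axis1=0, axis2=2)
    return von_neumann_entropy(rho_b) - von_neumann_entropy(rho_ab)

rng = np.random.default_rng(0)
for d in (2, 3):  # the qubit-qubit and qutrit-qutrit cases of Table 3
    rho = random_density_matrix(d * d, rng)
    print(f"d = {d}: I(A>B) = {coherent_information(rho, d, d):.4f}")
```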
3.2. Univariate linear regression
Table 4. The RMSEs of predicting the coherent information using each individual data feature.
Data features | RMSE |
---|---|
p_11 | 0.12 |
p_12 | 0.16 |
p_21 | 0.13 |
p_22 | 0.16 |
miu_2 | 0.07 |
miu_3 | 0.09 |
vne_A | 0.15 |
vne_B | 0.13 |
f_c | 0.13 |
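
The univariate study summarized in Table 4 can be sketched as follows: each feature listed in the table is used on its own to fit a linear regression against the coherent information, and the hold-out RMSE is recorded. The DataFrame `X_df`, the target vector `y`, and the split ratio are assumptions standing in for the prepared dataset.

```python
# Minimal sketch: per-feature (univariate) linear regression RMSEs,
# assuming X_df (columns named as in Table 4) and y are prepared elsewhere.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

def univariate_rmses(X_df, y):
    rmses = {}
    for name in X_df.columns:
        x = X_df[[name]].to_numpy()          # single-feature design matrix
        x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2,
                                                  random_state=0)
        pred = LinearRegression().fit(x_tr, y_tr).predict(x_te)
        rmses[name] = np.sqrt(mean_squared_error(y_te, pred))
    return rmses
```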