Tested model:
=============
------- notif_encoder:
NotifEncoderSimple(
  (model): Linear(in_features=29, out_features=32, bias=True)
)
------- evnt_encoder:
EvntEncoderSimple(
  (model): Linear(in_features=11, out_features=32, bias=True)
)
------- info_encoder:
InfoEncoder(
  (model): Sequential(
    (0): Linear(in_features=28, out_features=64, bias=True)
    (1): PReLU(num_parameters=1)
    (2): Linear(in_features=64, out_features=32, bias=True)
    (3): Dropout(p=0.2, inplace=False)
  )
)
------- embedder:
ConvEmbedderSimple(
  (adapter): Linear(in_features=32, out_features=32, bias=True)
  (representer): ConvRepresenter(
    (convs): ParameterList(
        (0): Object of type: ConvRepresenterBlock
        (1): Object of type: ConvRepresenterBlock
      (0): ConvRepresenterBlock(
        (model): Sequential(
          (0): Conv1d(32, 64, kernel_size=(3,), stride=(1,), padding=(1,))
          (1): PReLU(num_parameters=1)
          (2): Conv1d(64, 32, kernel_size=(3,), stride=(1,), padding=(1,))
          (3): PReLU(num_parameters=1)
        )
      )
      (1): ConvRepresenterBlock(
        (model): Sequential(
          (0): Conv1d(32, 64, kernel_size=(3,), stride=(1,), padding=(1,))
          (1): PReLU(num_parameters=1)
          (2): Conv1d(64, 32, kernel_size=(3,), stride=(1,), padding=(1,))
          (3): PReLU(num_parameters=1)
        )
      )
    )
  )
  (final): Conv1d(32, 32, kernel_size=(22,), stride=(1,))
)
(Training dataset class occurrences: {3: 176, 4: 45, 1: 46, 2: 1}.)
svm finished: True; SV cnt 93; Trained SVM with LINEAR kernel with loss 887950.0087727046 and 93 outliers.
svm finished: True; SV cnt 86; Trained SVM with POLYNOMIAL (deg 2) kernel with loss 843941.6123241688 and 86 outliers.
svm finished: True; SV cnt 91; Trained SVM with GAUSSIAN (sigma 1) kernel with loss 843941.6123241688 and 86 outliers.
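The printed repr above can be reconstructed as the following PyTorch sketch. Class names and layer sizes are taken directly from the log; the forward() logic, the residual-free chaining of the two conv blocks, the input sequence length of 22 (inferred from the final kernel size), and the use of nn.ModuleList in place of the logged nn.ParameterList are all assumptions.

```python
import torch
import torch.nn as nn

# Hedged reconstruction of the modules printed in the log.
# Layer shapes come from the repr; forward() behaviour is assumed.

class NotifEncoderSimple(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Linear(29, 32)  # 29 notification features -> 32-dim embedding

    def forward(self, x):
        return self.model(x)

class EvntEncoderSimple(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Linear(11, 32)  # 11 event features -> 32-dim embedding

    def forward(self, x):
        return self.model(x)

class InfoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(28, 64),
            nn.PReLU(),
            nn.Linear(64, 32),
            nn.Dropout(p=0.2),
        )

    def forward(self, x):
        return self.model(x)

class ConvRepresenterBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv1d(64, 32, kernel_size=3, padding=1),
            nn.PReLU(),
        )

    def forward(self, x):
        return self.model(x)

class ConvRepresenter(nn.Module):
    def __init__(self):
        super().__init__()
        # The log shows an nn.ParameterList holding these blocks;
        # nn.ModuleList is the idiomatic container for sub-modules.
        self.convs = nn.ModuleList([ConvRepresenterBlock(), ConvRepresenterBlock()])

    def forward(self, x):
        for block in self.convs:   # plain chaining; residual links are not assumed
            x = block(x)
        return x

class ConvEmbedderSimple(nn.Module):
    def __init__(self):
        super().__init__()
        self.adapter = nn.Linear(32, 32)
        self.representer = ConvRepresenter()
        self.final = nn.Conv1d(32, 32, kernel_size=22)  # collapses a length-22 sequence

    def forward(self, x):
        # x: (batch, seq_len=22, 32); seq_len inferred from the final kernel size
        x = self.adapter(x).transpose(1, 2)   # -> (batch, 32, 22) for Conv1d
        x = self.representer(x)               # shape-preserving conv blocks
        return self.final(x).squeeze(-1)      # -> (batch, 32)
```

A quick shape check: feeding `torch.randn(4, 22, 32)` through `ConvEmbedderSimple` yields a `(4, 32)` embedding, matching the 32-dim outputs of the three encoders.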
Results of classification of specially annotated notifications:
IDs of classified notifications: [9961382, 9962202, 9971171, 10168557, 10213020, 10275769, 11168292, 11169523, 11173814]
SVM with LINEAR kernel:
tensor([ 1.,  1.,  1.,  1.,  1.,  1.,  1., -1.,  1.], dtype=torch.float64)
SVM with POLYNOMIAL kernel:
tensor([ 1.,  1.,  1.,  1.,  1.,  1.,  1., -1.,  1.], dtype=torch.float64)
SVM with GAUSSIAN kernel:
tensor([ 1.,  1.,  1.,  1.,  1.,  1.,  1., -1.,  1.], dtype=torch.float64)
logistic regression:
tensor([[1.0000e+00, 8.2416e-28],
        [1.0000e+00, 3.0385e-35],
        [1.0000e+00, 4.9033e-14],
        [1.0000e+00, 0.0000e+00],
        [1.7237e-23, 1.0000e+00],
        [3.4791e-19, 1.0000e+00],
        [1.0000e+00, 0.0000e+00],
        [0.0000e+00, 1.0000e+00],
        [1.0000e+00, 0.0000e+00]], grad_fn=<...>)
MLP classifier:
tensor([[1.0000e+00, 0.0000e+00],
        [1.0000e+00, 0.0000e+00],
        [1.0000e+00, 0.0000e+00],
        [1.0000e+00, 0.0000e+00],
        [3.1970e-11, 1.0000e+00],
        [4.0606e-19, 1.0000e+00],
        [1.0000e+00, 0.0000e+00],
        [0.0000e+00, 1.0000e+00],
        [1.0000e+00, 0.0000e+00]], grad_fn=<...>)
Results on test set:
SVM with LINEAR kernel: 1.0 (13/13)
SVM with POLYNOMIAL kernel: 1.0 (13/13)
SVM with GAUSSIAN kernel: 1.0 (13/13)
logistic regression: 1.0 (13/13)
MLP classifier: 1.0 (13/13)
(class occurrence counts: {3: 11, 4: 2})
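The printed outputs map to class decisions as sketched below. The sign convention (+1 = accepted, -1 = rejected) for the SVM scores, the argmax rule over the two softmax columns, and the helper name `accuracy` are assumptions; the tensor values are taken from the log.

```python
import torch

# SVM decision values for the nine annotated notifications (from the log).
svm_scores = torch.tensor([1., 1., 1., 1., 1., 1., 1., -1., 1.], dtype=torch.float64)
svm_labels = (svm_scores > 0).long()   # 1 where the SVM accepted, 0 where it rejected

# Two-column softmax rows (first and fifth rows of the logistic-regression output):
# the predicted class is the column with the higher probability.
probs = torch.tensor([[1.0000e+00, 8.2416e-28],
                      [1.7237e-23, 1.0000e+00]])
pred = probs.argmax(dim=1)

def accuracy(pred, target):
    """Return (fraction correct, number correct), matching the "1.0 (13/13)" format."""
    correct = int((pred == target).sum())
    return correct / len(target), correct

frac, correct = accuracy(pred, torch.tensor([0, 1]))
print(f"{frac} ({correct}/{len(pred)})")   # -> 1.0 (2/2)
```

With the 13 test-set targets in place of the toy two-element target, this produces exactly the "1.0 (13/13)" lines reported above.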