cola Report for Golub_leukemia

Date: 2019-12-26 18:30:06 CET, cola version: 1.3.2


Summary

First, the result object rl is renamed to res_list.

res_list = rl

All available functions which can be applied to this res_list object:

res_list
#> A 'ConsensusPartitionList' object with 24 methods.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows are extracted by 'SD, CV, MAD, ATC' methods.
#>   Subgroups are detected by 'hclust, kmeans, skmeans, pam, mclust, NMF' method.
#>   Number of partitions are tried for k = 2, 3, 4, 5, 6.
#>   Performed in total 30000 partitions by row resampling.
#> 
#> Following methods can be applied to this 'ConsensusPartitionList' object:
#>  [1] "cola_report"           "collect_classes"       "collect_plots"         "collect_stats"        
#>  [5] "colnames"              "functional_enrichment" "get_anno_col"          "get_anno"             
#>  [9] "get_classes"           "get_matrix"            "get_membership"        "get_stats"            
#> [13] "is_best_k"             "is_stable_k"           "ncol"                  "nrow"                 
#> [17] "rownames"              "show"                  "suggest_best_k"        "test_to_known_factors"
#> [21] "top_rows_heatmap"      "top_rows_overlap"     
#> 
#> You can get result for a single method by, e.g. object["SD", "hclust"] or object["SD:hclust"]
#> or a subset of methods by object[c("SD", "CV"), c("hclust", "kmeans")]

The call of run_all_consensus_partition_methods() was:

#> run_all_consensus_partition_methods(data = m, mc.cores = 4, anno = anno, anno_col = anno_col)

Dimension of the input matrix:

mat = get_matrix(res_list)
dim(mat)
#> [1] 4116   72

Density distribution

The density distribution of each sample is visualized as one column in the following heatmap. The columns are clustered using, as the distance measure, the Kolmogorov-Smirnov statistic between two distributions.

library(ComplexHeatmap)
densityHeatmap(mat, top_annotation = HeatmapAnnotation(df = get_anno(res_list), 
    col = get_anno_col(res_list)), ylab = "value", cluster_columns = TRUE, show_column_names = FALSE,
    mc.cores = 4)

plot of chunk density-heatmap
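The Kolmogorov-Smirnov distance used for the column clustering above can be sketched in base R with ks.test() (toy data and variable names invented for illustration; this is not cola's internal code):

```r
# Toy data: three hypothetical "samples" (columns); s2 has a shifted distribution
set.seed(123)
mat = cbind(s1 = rnorm(100), s2 = rnorm(100, mean = 1), s3 = rnorm(100))

# Pairwise distance = Kolmogorov-Smirnov statistic between column distributions
n = ncol(mat)
d = matrix(0, n, n, dimnames = list(colnames(mat), colnames(mat)))
for (i in seq_len(n - 1)) {
    for (j in (i + 1):n) {
        d[i, j] = d[j, i] = unname(ks.test(mat[, i], mat[, j])$statistic)
    }
}
round(d, 3)  # s1 and s3 (same distribution) end up much closer than s1 and s2
```

Clustering such a matrix, e.g. with hclust(as.dist(d)), groups samples whose value distributions are similar, which is what the column dendrogram of the density heatmap reflects.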

Suggest the best k

The following table shows the best k (number of partitions) for each combination of top-value methods and partition methods. Clicking a method name in the table jumps to the section for that combination of methods.

The cola vignette explains the definition of the metrics used for determining the best number of partitions.

suggest_best_k(res_list)
suggest_best_k(res_list)
Method            The best k  1-PAC  Mean silhouette  Concordance         Optional k
ATC:kmeans                 2  1.000            0.982        0.992  **
ATC:NMF                    2  0.972            0.927        0.973  **
ATC:skmeans                3  0.961            0.932        0.973  **             2
MAD:skmeans                3  0.866            0.872        0.946
CV:skmeans                 3  0.837            0.857        0.938
MAD:mclust                 2  0.824            0.927        0.932
SD:skmeans                 3  0.745            0.836        0.918
CV:kmeans                  3  0.739            0.864        0.901
MAD:NMF                    2  0.680            0.811        0.925
SD:kmeans                  2  0.662            0.836        0.911
MAD:pam                    2  0.651            0.824        0.925
SD:mclust                  5  0.634            0.721        0.790
SD:NMF                     2  0.633            0.823        0.928
CV:NMF                     2  0.633            0.787        0.916
MAD:kmeans                 2  0.623            0.805        0.904
ATC:pam                    2  0.597            0.808        0.918
ATC:hclust                 2  0.574            0.813        0.911
SD:pam                     2  0.548            0.686        0.876
CV:pam                     2  0.546            0.784        0.904
CV:mclust                  2  0.489            0.908        0.895
ATC:mclust                 2  0.360            0.711        0.828
SD:hclust                  2  0.236            0.777        0.851
MAD:hclust                 2  0.227            0.737        0.838
CV:hclust                  2  0.214            0.570        0.805

**: 1-PAC > 0.95, *: 1-PAC > 0.9

CDF of consensus matrices

Cumulative distribution function curves of consensus matrix for all methods.

collect_plots(res_list, fun = plot_ecdf)

plot of chunk collect-plots
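The 1-PAC scores reported throughout this document are derived from such CDF curves: PAC (the proportion of ambiguous clustering) is the fraction of consensus values falling in an intermediate interval such as (0.1, 0.9). A base-R sketch with invented consensus values:

```r
# Hypothetical consensus values: most sample pairs agree almost always
# (values near 0 or 1), a minority of pairs are ambiguous (intermediate values)
set.seed(1)
consensus = c(runif(80, 0, 0.05), runif(80, 0.95, 1), runif(40, 0.3, 0.7))

Fn = ecdf(consensus)        # the CDF curve that plot_ecdf draws
pac = Fn(0.9) - Fn(0.1)     # fraction in the ambiguous interval: 40/200 = 0.2
1 - pac                     # 0.8; a higher 1-PAC indicates a more stable partition
```

On the ECDF plot, a stable partition shows a flat middle segment (few ambiguous values), which is exactly a small PAC.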

Consensus heatmap

Consensus heatmaps for all methods. (What is a consensus heatmap?)

collect_plots(res_list, k = 2, fun = consensus_heatmap, mc.cores = 4)

plot of chunk tab-collect-consensus-heatmap-1

Membership heatmap

Membership heatmaps for all methods. (What is a membership heatmap?)

collect_plots(res_list, k = 2, fun = membership_heatmap, mc.cores = 4)

plot of chunk tab-collect-membership-heatmap-1

Signature heatmap

Signature heatmaps for all methods. (What is a signature heatmap?)

Note in following heatmaps, rows are scaled.

collect_plots(res_list, k = 2, fun = get_signatures, mc.cores = 4)

plot of chunk tab-collect-get-signatures-1

Statistics table

The statistics used for measuring the stability of consensus partitioning. (How are they defined?)

get_stats(res_list, k = 2)
#>             k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> SD:NMF      2 0.633           0.823       0.927          0.489 0.512   0.512
#> CV:NMF      2 0.633           0.787       0.916          0.491 0.507   0.507
#> MAD:NMF     2 0.680           0.811       0.925          0.490 0.507   0.507
#> ATC:NMF     2 0.972           0.927       0.973          0.505 0.495   0.495
#> SD:skmeans  2 0.617           0.659       0.865          0.497 0.512   0.512
#> CV:skmeans  2 0.629           0.748       0.890          0.497 0.495   0.495
#> MAD:skmeans 2 0.640           0.728       0.884          0.497 0.493   0.493
#> ATC:skmeans 2 1.000           0.977       0.991          0.505 0.496   0.496
#> SD:mclust   2 0.373           0.801       0.818          0.410 0.518   0.518
#> CV:mclust   2 0.489           0.908       0.895          0.431 0.532   0.532
#> MAD:mclust  2 0.824           0.927       0.932          0.442 0.532   0.532
#> ATC:mclust  2 0.360           0.711       0.828          0.426 0.493   0.493
#> SD:kmeans   2 0.662           0.836       0.911          0.479 0.540   0.540
#> CV:kmeans   2 0.653           0.806       0.905          0.484 0.532   0.532
#> MAD:kmeans  2 0.623           0.805       0.904          0.483 0.540   0.540
#> ATC:kmeans  2 1.000           0.982       0.992          0.503 0.499   0.499
#> SD:pam      2 0.548           0.686       0.876          0.490 0.493   0.493
#> CV:pam      2 0.546           0.784       0.904          0.496 0.496   0.496
#> MAD:pam     2 0.651           0.824       0.925          0.496 0.499   0.499
#> ATC:pam     2 0.597           0.808       0.918          0.475 0.518   0.518
#> SD:hclust   2 0.236           0.777       0.851          0.435 0.559   0.559
#> CV:hclust   2 0.214           0.570       0.805          0.454 0.512   0.512
#> MAD:hclust  2 0.227           0.737       0.838          0.444 0.549   0.549
#> ATC:hclust  2 0.574           0.813       0.911          0.486 0.495   0.495

The following heatmap plots the partition for each combination of methods; the lightness corresponds to the silhouette scores of the samples in each method. On top, the consensus subgroup is inferred from all methods, using the mean silhouette scores as weights.

collect_stats(res_list, k = 2)

plot of chunk tab-collect-stats-from-consensus-partition-list-1

Partition from all methods

Collect partitions from all methods:

collect_classes(res_list, k = 2)

plot of chunk tab-collect-classes-from-consensus-partition-list-1

Top rows overlap

Overlap of top rows from different top-row methods:

top_rows_overlap(res_list, top_n = 412, method = "euler")

plot of chunk tab-top-rows-overlap-by-euler-1

Also visualize the correspondence of rankings between different top-row methods:

top_rows_overlap(res_list, top_n = 412, method = "correspondance")

plot of chunk tab-top-rows-overlap-by-correspondance-1

Heatmaps of the top rows:

top_rows_heatmap(res_list, top_n = 412)

plot of chunk tab-top-rows-heatmap-1

Test to known annotations

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA is applied; if it is discrete, a chi-squared contingency table test is applied.
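The two tests can be sketched in base R (the partition, annotation counts, and values below are invented for illustration; this is not cola's implementation):

```r
set.seed(1)
subgroup = factor(rep(c(1, 2), each = 36))   # hypothetical 2-group partition of 72 samples

# Discrete annotation (e.g. ALL/AML): chi-squared contingency table test
anno_discrete = c(rep("ALL", 30), rep("AML", 6),   # counts invented for illustration
                  rep("ALL", 17), rep("AML", 19))
p_chisq = chisq.test(table(subgroup, anno_discrete))$p.value

# Numeric annotation: one-way ANOVA across the subgroups
anno_numeric = rnorm(72, mean = as.integer(subgroup))
p_aov = summary(aov(anno_numeric ~ subgroup))[[1]][["Pr(>F)"]][1]

c(chisq = p_chisq, anova = p_aov)   # small p-values: annotation tracks the subgroups
```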

test_to_known_factors(res_list, k = 2)
#>              n ALL.AML(p) k
#> SD:NMF      66   1.35e-12 2
#> CV:NMF      63   7.82e-13 2
#> MAD:NMF     65   2.13e-12 2
#> ATC:NMF     69   5.41e-05 2
#> SD:skmeans  54   1.01e-11 2
#> CV:skmeans  55   3.77e-11 2
#> MAD:skmeans 67   2.39e-06 2
#> ATC:skmeans 71   5.23e-05 2
#> SD:mclust   70   2.94e-14 2
#> CV:mclust   70   2.94e-14 2
#> MAD:mclust  72   8.75e-14 2
#> ATC:mclust  62   1.26e-12 2
#> SD:kmeans   71   2.08e-14 2
#> CV:kmeans   66   2.15e-13 2
#> MAD:kmeans  65   3.43e-13 2
#> ATC:kmeans  72   1.50e-04 2
#> SD:pam      54   1.01e-11 2
#> CV:pam      65   1.04e-07 2
#> MAD:pam     63   2.89e-07 2
#> ATC:pam     67   1.65e-04 2
#> SD:hclust   70   4.70e-15 2
#> CV:hclust   49   1.25e-10 2
#> MAD:hclust  69   5.21e-14 2
#> ATC:hclust  66   4.16e-05 2

Results for each method


SD:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "hclust"]
# you can also extract it by
# res = res_list["SD:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'SD' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-hclust-collect-plots

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots of the statistics used for choosing an "optimized" k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, we do not accept k as better than k-1.
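The Rand index is simply the fraction of sample pairs on which two partitions agree, i.e. pairs that are either together in both or separated in both. A minimal base-R sketch (the partitions are invented; this is not cola's implementation):

```r
# Rand index between two partitions of the same samples
rand_index = function(cl1, cl2) {
    same1 = outer(cl1, cl1, "==")    # pair together in partition 1?
    same2 = outer(cl2, cl2, "==")    # pair together in partition 2?
    agree = same1 == same2           # pair treated the same way by both partitions
    mean(agree[upper.tri(agree)])    # fraction of agreeing pairs
}

cl_k2 = c(1, 1, 1, 2, 2, 2)   # hypothetical partition with k-1 groups
cl_k3 = c(1, 1, 3, 2, 2, 2)   # partition with k groups: one sample split off

rand_index(cl_k2, cl_k3)      # 13/15: the two partitions are very similar
```

A value close to 1 means the extra group barely changes the partition, so k is not accepted as an improvement over k-1.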

select_partition_number(res)

plot of chunk SD-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.236           0.777       0.851         0.4348 0.559   0.559
#> 3 3 0.245           0.570       0.676         0.4324 0.737   0.543
#> 4 4 0.388           0.570       0.714         0.1342 0.940   0.820
#> 5 5 0.526           0.612       0.703         0.0851 0.948   0.819
#> 6 6 0.591           0.587       0.716         0.0493 1.000   1.000

suggest_best_k() suggests the best k based on these statistics; the rules are described in the cola vignette.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix is the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score from the consensus matrix.
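As a minimal sketch of how these two derived columns come about, take the membership row of sample_39 from the output below: the class is the group with the highest probability, and the entropy is the Shannon entropy of the membership probabilities (here k = 2):

```r
p = c(p1 = 0.296, p2 = 0.704)   # membership probabilities of one sample (sample_39)

cl = which.max(p)                          # class label: group with highest probability
entropy = -sum(p[p > 0] * log2(p[p > 0]))  # Shannon entropy; 0 = fully certain

unname(cl)            # 2
round(entropy, 4)     # 0.8763, matching the table
```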


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.8763      0.742 0.296 0.704
#> sample_40     2  0.9866      0.489 0.432 0.568
#> sample_42     2  0.5519      0.808 0.128 0.872
#> sample_47     2  0.1843      0.804 0.028 0.972
#> sample_48     2  0.0000      0.793 0.000 1.000
#> sample_49     2  0.9358      0.682 0.352 0.648
#> sample_41     2  0.0000      0.793 0.000 1.000
#> sample_43     2  0.4562      0.810 0.096 0.904
#> sample_44     2  0.5294      0.821 0.120 0.880
#> sample_45     2  0.5294      0.821 0.120 0.880
#> sample_46     2  0.5519      0.820 0.128 0.872
#> sample_70     2  0.6048      0.818 0.148 0.852
#> sample_71     2  0.7602      0.759 0.220 0.780
#> sample_72     2  0.7602      0.759 0.220 0.780
#> sample_68     2  0.0000      0.793 0.000 1.000
#> sample_69     2  0.0000      0.793 0.000 1.000
#> sample_67     2  0.7883      0.702 0.236 0.764
#> sample_55     2  0.8713      0.744 0.292 0.708
#> sample_56     2  0.8861      0.737 0.304 0.696
#> sample_59     2  0.8443      0.766 0.272 0.728
#> sample_52     1  0.7139      0.781 0.804 0.196
#> sample_53     1  0.0376      0.866 0.996 0.004
#> sample_51     1  0.0000      0.865 1.000 0.000
#> sample_50     1  0.0000      0.865 1.000 0.000
#> sample_54     1  0.7219      0.774 0.800 0.200
#> sample_57     1  0.6973      0.789 0.812 0.188
#> sample_58     1  0.0000      0.865 1.000 0.000
#> sample_60     1  0.7219      0.774 0.800 0.200
#> sample_61     1  0.0672      0.868 0.992 0.008
#> sample_65     1  0.1184      0.869 0.984 0.016
#> sample_66     2  0.8016      0.607 0.244 0.756
#> sample_63     1  0.7139      0.781 0.804 0.196
#> sample_64     1  0.8763      0.560 0.704 0.296
#> sample_62     1  0.7139      0.781 0.804 0.196
#> sample_1      2  0.8555      0.753 0.280 0.720
#> sample_2      2  0.7453      0.727 0.212 0.788
#> sample_3      2  0.7299      0.801 0.204 0.796
#> sample_4      2  0.8499      0.757 0.276 0.724
#> sample_5      2  0.0000      0.793 0.000 1.000
#> sample_6      2  0.7299      0.801 0.204 0.796
#> sample_7      2  0.8555      0.754 0.280 0.720
#> sample_8      2  0.8661      0.750 0.288 0.712
#> sample_9      2  0.7299      0.801 0.204 0.796
#> sample_10     2  0.7299      0.801 0.204 0.796
#> sample_11     2  0.7299      0.801 0.204 0.796
#> sample_12     2  0.9522      0.640 0.372 0.628
#> sample_13     2  0.0000      0.793 0.000 1.000
#> sample_14     2  0.3274      0.803 0.060 0.940
#> sample_15     2  0.0000      0.793 0.000 1.000
#> sample_16     2  0.4161      0.819 0.084 0.916
#> sample_17     2  0.2948      0.800 0.052 0.948
#> sample_18     2  0.8081      0.791 0.248 0.752
#> sample_19     2  0.4161      0.819 0.084 0.916
#> sample_20     2  0.0000      0.793 0.000 1.000
#> sample_21     2  0.0000      0.793 0.000 1.000
#> sample_22     2  0.8861      0.737 0.304 0.696
#> sample_23     2  0.7299      0.801 0.204 0.796
#> sample_24     2  0.0000      0.793 0.000 1.000
#> sample_25     2  0.9044      0.725 0.320 0.680
#> sample_26     2  0.4298      0.820 0.088 0.912
#> sample_27     2  0.9358      0.682 0.352 0.648
#> sample_34     1  0.6247      0.814 0.844 0.156
#> sample_35     1  0.8555      0.606 0.720 0.280
#> sample_36     1  0.0672      0.867 0.992 0.008
#> sample_37     1  0.0376      0.867 0.996 0.004
#> sample_38     1  0.3584      0.853 0.932 0.068
#> sample_28     1  0.4022      0.850 0.920 0.080
#> sample_29     2  0.9248      0.365 0.340 0.660
#> sample_30     1  0.0672      0.867 0.992 0.008
#> sample_31     1  0.3584      0.863 0.932 0.068
#> sample_32     1  0.2948      0.867 0.948 0.052
#> sample_33     1  0.0000      0.865 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-hclust-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-hclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and only performs the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
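How such columns arise can be sketched in base R for one matrix row (invented data; a two-sample t-test stands in here for cola's own differential test):

```r
set.seed(1)
x = c(rnorm(10, mean = 8), rnorm(10, mean = 9))   # one matrix row, two sample groups
grp = factor(rep(c("1", "2"), each = 10))

p = t.test(x ~ grp)$p.value        # per-row differential test
fdr = p.adjust(p, method = "BH")   # in practice adjusted across all rows at once

xs = as.numeric(scale(x))          # row scaling: z-scores across samples
c(mean_1 = mean(x[grp == "1"]), mean_2 = mean(x[grp == "2"]),
  scaled_mean_1 = mean(xs[grp == "1"]), scaled_mean_2 = mean(xs[grp == "2"]))
```

Because each row is z-scored, the scaled group means of a two-group differential row sit on opposite sides of zero, matching the pattern in the example output above.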

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-hclust-dimension-reduction-1

Following heatmap shows how subgroups are split when increasing k:

collect_classes(res)

plot of chunk SD-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n ALL.AML(p) k
#> SD:hclust 70   4.70e-15 2
#> SD:hclust 61   5.68e-14 3
#> SD:hclust 51   3.35e-10 4
#> SD:hclust 55   2.43e-10 5
#> SD:hclust 48   9.44e-10 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


SD:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "kmeans"]
# you can also extract it by
# res = res_list["SD:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'SD' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-kmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots of the statistics used for choosing an "optimized" k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, we do not accept k as better than k-1.

select_partition_number(res)

plot of chunk SD-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.662           0.836       0.911         0.4793 0.540   0.540
#> 3 3 0.686           0.822       0.870         0.3544 0.789   0.609
#> 4 4 0.580           0.653       0.780         0.1247 0.900   0.718
#> 5 5 0.656           0.530       0.683         0.0757 0.922   0.732
#> 6 6 0.710           0.652       0.782         0.0496 0.852   0.445

suggest_best_k() suggests the best k based on these statistics; the rules are described in the cola vignette.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix is the probability of belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.9286      0.614 0.344 0.656
#> sample_40     2  0.9522      0.575 0.372 0.628
#> sample_42     2  0.0376      0.858 0.004 0.996
#> sample_47     2  0.0376      0.858 0.004 0.996
#> sample_48     2  0.0376      0.858 0.004 0.996
#> sample_49     2  0.9552      0.568 0.376 0.624
#> sample_41     2  0.0376      0.858 0.004 0.996
#> sample_43     2  0.0376      0.858 0.004 0.996
#> sample_44     2  0.0376      0.858 0.004 0.996
#> sample_45     2  0.0376      0.858 0.004 0.996
#> sample_46     2  0.0376      0.858 0.004 0.996
#> sample_70     2  0.0672      0.857 0.008 0.992
#> sample_71     2  0.0000      0.856 0.000 1.000
#> sample_72     2  0.0000      0.856 0.000 1.000
#> sample_68     2  0.0376      0.858 0.004 0.996
#> sample_69     2  0.0376      0.858 0.004 0.996
#> sample_67     2  1.0000     -0.047 0.500 0.500
#> sample_55     2  0.9393      0.599 0.356 0.644
#> sample_56     2  0.9393      0.599 0.356 0.644
#> sample_59     2  0.0376      0.858 0.004 0.996
#> sample_52     1  0.0672      0.992 0.992 0.008
#> sample_53     1  0.0938      0.988 0.988 0.012
#> sample_51     1  0.0672      0.992 0.992 0.008
#> sample_50     1  0.0672      0.992 0.992 0.008
#> sample_54     1  0.4690      0.887 0.900 0.100
#> sample_57     1  0.0672      0.992 0.992 0.008
#> sample_58     1  0.0672      0.992 0.992 0.008
#> sample_60     1  0.0672      0.992 0.992 0.008
#> sample_61     1  0.0672      0.992 0.992 0.008
#> sample_65     1  0.0672      0.992 0.992 0.008
#> sample_66     2  0.2423      0.838 0.040 0.960
#> sample_63     1  0.0672      0.992 0.992 0.008
#> sample_64     1  0.0672      0.992 0.992 0.008
#> sample_62     1  0.0672      0.992 0.992 0.008
#> sample_1      2  0.0672      0.857 0.008 0.992
#> sample_2      2  0.6148      0.744 0.152 0.848
#> sample_3      2  0.9323      0.619 0.348 0.652
#> sample_4      2  0.0672      0.857 0.008 0.992
#> sample_5      2  0.0376      0.858 0.004 0.996
#> sample_6      2  0.9323      0.619 0.348 0.652
#> sample_7      2  0.9358      0.604 0.352 0.648
#> sample_8      2  0.9580      0.562 0.380 0.620
#> sample_9      2  0.1633      0.849 0.024 0.976
#> sample_10     2  0.9323      0.619 0.348 0.652
#> sample_11     2  0.1633      0.849 0.024 0.976
#> sample_12     1  0.0672      0.992 0.992 0.008
#> sample_13     2  0.0376      0.858 0.004 0.996
#> sample_14     2  0.1633      0.849 0.024 0.976
#> sample_15     2  0.0376      0.858 0.004 0.996
#> sample_16     2  0.0376      0.858 0.004 0.996
#> sample_17     2  0.1414      0.852 0.020 0.980
#> sample_18     2  0.5842      0.788 0.140 0.860
#> sample_19     2  0.0376      0.858 0.004 0.996
#> sample_20     2  0.0376      0.858 0.004 0.996
#> sample_21     2  0.0000      0.856 0.000 1.000
#> sample_22     2  0.9552      0.562 0.376 0.624
#> sample_23     2  0.9323      0.619 0.348 0.652
#> sample_24     2  0.0376      0.858 0.004 0.996
#> sample_25     2  0.8955      0.650 0.312 0.688
#> sample_26     2  0.0376      0.858 0.004 0.996
#> sample_27     2  0.9580      0.562 0.380 0.620
#> sample_34     1  0.0672      0.992 0.992 0.008
#> sample_35     1  0.0672      0.992 0.992 0.008
#> sample_36     1  0.0672      0.992 0.992 0.008
#> sample_37     1  0.0672      0.992 0.992 0.008
#> sample_38     1  0.0938      0.988 0.988 0.012
#> sample_28     1  0.0672      0.992 0.992 0.008
#> sample_29     1  0.3431      0.928 0.936 0.064
#> sample_30     1  0.0672      0.992 0.992 0.008
#> sample_31     1  0.0672      0.992 0.992 0.008
#> sample_32     1  0.0672      0.992 0.992 0.008
#> sample_33     1  0.0672      0.992 0.992 0.008

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-kmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-kmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and only performs the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-kmeans-dimension-reduction-1

Following heatmap shows how subgroups are split when increasing k:

collect_classes(res)

plot of chunk SD-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n ALL.AML(p) k
#> SD:kmeans 71   2.08e-14 2
#> SD:kmeans 66   2.72e-13 3
#> SD:kmeans 63   9.95e-13 4
#> SD:kmeans 50   7.99e-11 5
#> SD:kmeans 56   5.63e-10 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


SD:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "skmeans"]
# you can also extract it by
# res = res_list["SD:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'SD' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk SD-skmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots of the statistics used for choosing an "optimized" k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, we do not accept k as better than k-1.

select_partition_number(res)

plot of chunk SD-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.617           0.659       0.865         0.4970 0.512   0.512
#> 3 3 0.745           0.836       0.918         0.3352 0.763   0.560
#> 4 4 0.718           0.768       0.857         0.1107 0.913   0.745
#> 5 5 0.653           0.551       0.750         0.0682 0.929   0.750
#> 6 6 0.673           0.561       0.755         0.0479 0.922   0.675

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 3

The following table shows the partitions. The membership matrix (the columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability that an item belongs to a certain group, and the final class label for an item is the group with the highest probability.

In get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
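The entropy column is consistent with the normalized Shannon entropy of the membership row, -sum(p * log2(p)) / log2(k). A minimal base-R sketch (an illustration, not cola's internal code) reproduces two of the values in the table below:

```r
# Illustration: normalized Shannon entropy of one membership row.
# 0 means a certain assignment, 1 a maximally ambiguous one.
membership_entropy = function(p) {
    k = length(p)
    p = p[p > 0]                 # 0 * log2(0) is treated as 0
    -sum(p * log2(p)) / log2(k)
}

round(membership_entropy(c(0.488, 0.512)), 4)  # 0.9996, as for sample_42
round(membership_entropy(c(0.016, 0.984)), 4)  # 0.1184, as for sample_39
```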


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.1184    0.28974 0.016 0.984
#> sample_40     2  0.2236    0.20871 0.036 0.964
#> sample_42     2  0.9996    0.79587 0.488 0.512
#> sample_47     2  0.9996    0.79587 0.488 0.512
#> sample_48     2  0.9996    0.79587 0.488 0.512
#> sample_49     2  0.3274    0.14651 0.060 0.940
#> sample_41     2  0.9996    0.79587 0.488 0.512
#> sample_43     2  0.9996    0.79587 0.488 0.512
#> sample_44     2  0.9996    0.79587 0.488 0.512
#> sample_45     2  0.9996    0.79587 0.488 0.512
#> sample_46     2  0.9996    0.79587 0.488 0.512
#> sample_70     2  0.9996    0.79587 0.488 0.512
#> sample_71     2  0.9996    0.79587 0.488 0.512
#> sample_72     2  0.9996    0.79587 0.488 0.512
#> sample_68     2  0.9996    0.79587 0.488 0.512
#> sample_69     2  0.9996    0.79587 0.488 0.512
#> sample_67     1  0.0376    0.11499 0.996 0.004
#> sample_55     2  0.1184    0.25566 0.016 0.984
#> sample_56     2  0.0000    0.28969 0.000 1.000
#> sample_59     2  0.9996    0.79587 0.488 0.512
#> sample_52     1  0.9996    0.87667 0.512 0.488
#> sample_53     1  0.9996    0.87667 0.512 0.488
#> sample_51     1  0.9996    0.87667 0.512 0.488
#> sample_50     1  0.9996    0.87667 0.512 0.488
#> sample_54     1  0.8713    0.65021 0.708 0.292
#> sample_57     1  0.9996    0.87667 0.512 0.488
#> sample_58     1  0.9996    0.87667 0.512 0.488
#> sample_60     1  0.9996    0.87667 0.512 0.488
#> sample_61     1  0.9996    0.87667 0.512 0.488
#> sample_65     1  0.9996    0.87667 0.512 0.488
#> sample_66     1  0.3733   -0.07760 0.928 0.072
#> sample_63     1  0.9996    0.87667 0.512 0.488
#> sample_64     1  0.9996    0.87667 0.512 0.488
#> sample_62     1  0.9996    0.87667 0.512 0.488
#> sample_1      2  0.9996    0.79587 0.488 0.512
#> sample_2      1  0.2603   -0.00294 0.956 0.044
#> sample_3      2  0.1633    0.33281 0.024 0.976
#> sample_4      2  0.9996    0.79587 0.488 0.512
#> sample_5      2  0.9996    0.79587 0.488 0.512
#> sample_6      2  0.1633    0.33281 0.024 0.976
#> sample_7      2  0.1414    0.31215 0.020 0.980
#> sample_8      2  0.3584    0.12279 0.068 0.932
#> sample_9      2  0.9996    0.79587 0.488 0.512
#> sample_10     2  0.5629    0.03736 0.132 0.868
#> sample_11     2  0.9996    0.79587 0.488 0.512
#> sample_12     1  0.9996    0.87667 0.512 0.488
#> sample_13     2  0.9996    0.79587 0.488 0.512
#> sample_14     2  0.9996    0.79587 0.488 0.512
#> sample_15     2  0.9996    0.79587 0.488 0.512
#> sample_16     2  0.9996    0.79587 0.488 0.512
#> sample_17     1  0.8555   -0.53149 0.720 0.280
#> sample_18     2  0.9963    0.78087 0.464 0.536
#> sample_19     2  0.9996    0.79587 0.488 0.512
#> sample_20     2  0.9996    0.79587 0.488 0.512
#> sample_21     2  0.9996    0.79587 0.488 0.512
#> sample_22     2  0.3733    0.10999 0.072 0.928
#> sample_23     2  0.1633    0.33281 0.024 0.976
#> sample_24     2  0.9996    0.79587 0.488 0.512
#> sample_25     2  0.7376    0.47066 0.208 0.792
#> sample_26     2  0.9996    0.79587 0.488 0.512
#> sample_27     2  0.3274    0.14651 0.060 0.940
#> sample_34     1  0.9996    0.87667 0.512 0.488
#> sample_35     1  0.9996    0.87667 0.512 0.488
#> sample_36     1  0.9996    0.87667 0.512 0.488
#> sample_37     1  0.9996    0.87667 0.512 0.488
#> sample_38     1  0.9996    0.87667 0.512 0.488
#> sample_28     1  0.9996    0.87667 0.512 0.488
#> sample_29     1  0.9087    0.69508 0.676 0.324
#> sample_30     1  0.9996    0.87667 0.512 0.488
#> sample_31     1  0.9996    0.87667 0.512 0.488
#> sample_32     1  0.9996    0.87667 0.512 0.488
#> sample_33     1  0.9996    0.87667 0.512 0.488

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-skmeans-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-skmeans-membership-heatmap-1

Once the column classes are determined, we can look for signature rows that are significantly different between the classes; these are candidate markers for the classes. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the result of the call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and performs only the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-skmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> SD:skmeans 54   1.01e-11 2
#> SD:skmeans 65   4.40e-13 3
#> SD:skmeans 64   6.16e-13 4
#> SD:skmeans 51   4.27e-09 5
#> SD:skmeans 48   2.14e-08 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See the vignette for more detailed explanations.


SD:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "pam"]
# you can also extract it by
# res = res_list["SD:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'SD' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots generated from res for every k (number of partitions) onto a single page, allowing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk SD-pam-collect-plots

All the plots in the panels can also be generated by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk SD-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.548           0.686       0.876         0.4897 0.493   0.493
#> 3 3 0.427           0.654       0.797         0.3349 0.748   0.531
#> 4 4 0.648           0.737       0.840         0.1178 0.912   0.747
#> 5 5 0.615           0.575       0.770         0.0794 0.889   0.631
#> 6 6 0.651           0.449       0.724         0.0473 0.878   0.524

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (the columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability that an item belongs to a certain group, and the final class label for an item is the group with the highest probability.

In get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.8713     0.5351 0.292 0.708
#> sample_40     1  0.9732     0.3900 0.596 0.404
#> sample_42     2  0.3274     0.8284 0.060 0.940
#> sample_47     2  0.0000     0.8616 0.000 1.000
#> sample_48     2  0.0000     0.8616 0.000 1.000
#> sample_49     1  0.9795     0.3574 0.584 0.416
#> sample_41     2  0.0000     0.8616 0.000 1.000
#> sample_43     2  0.0000     0.8616 0.000 1.000
#> sample_44     2  0.0000     0.8616 0.000 1.000
#> sample_45     2  0.0000     0.8616 0.000 1.000
#> sample_46     2  0.0000     0.8616 0.000 1.000
#> sample_70     2  0.9944     0.0442 0.456 0.544
#> sample_71     2  0.3733     0.8221 0.072 0.928
#> sample_72     2  0.1414     0.8525 0.020 0.980
#> sample_68     2  0.0000     0.8616 0.000 1.000
#> sample_69     2  0.0000     0.8616 0.000 1.000
#> sample_67     1  0.9608     0.2700 0.616 0.384
#> sample_55     1  0.9833     0.3459 0.576 0.424
#> sample_56     2  0.8608     0.5360 0.284 0.716
#> sample_59     2  0.3879     0.8111 0.076 0.924
#> sample_52     1  0.0000     0.8212 1.000 0.000
#> sample_53     1  0.0000     0.8212 1.000 0.000
#> sample_51     1  0.0000     0.8212 1.000 0.000
#> sample_50     1  0.0000     0.8212 1.000 0.000
#> sample_54     1  0.5842     0.7295 0.860 0.140
#> sample_57     1  0.0000     0.8212 1.000 0.000
#> sample_58     1  0.0000     0.8212 1.000 0.000
#> sample_60     1  0.2948     0.8004 0.948 0.052
#> sample_61     1  0.0376     0.8201 0.996 0.004
#> sample_65     1  0.0000     0.8212 1.000 0.000
#> sample_66     2  0.9970     0.0956 0.468 0.532
#> sample_63     1  0.0000     0.8212 1.000 0.000
#> sample_64     1  0.6247     0.7235 0.844 0.156
#> sample_62     1  0.2236     0.8061 0.964 0.036
#> sample_1      2  0.0000     0.8616 0.000 1.000
#> sample_2      2  0.9963     0.1369 0.464 0.536
#> sample_3      1  0.9815     0.3498 0.580 0.420
#> sample_4      2  0.9775     0.1889 0.412 0.588
#> sample_5      2  0.0000     0.8616 0.000 1.000
#> sample_6      1  0.9635     0.4212 0.612 0.388
#> sample_7      1  0.9944     0.2579 0.544 0.456
#> sample_8      2  0.9983    -0.0322 0.476 0.524
#> sample_9      2  0.9248     0.4058 0.340 0.660
#> sample_10     1  0.9580     0.4254 0.620 0.380
#> sample_11     2  0.2603     0.8416 0.044 0.956
#> sample_12     1  0.0938     0.8173 0.988 0.012
#> sample_13     2  0.0000     0.8616 0.000 1.000
#> sample_14     2  0.1633     0.8502 0.024 0.976
#> sample_15     2  0.0000     0.8616 0.000 1.000
#> sample_16     2  0.0000     0.8616 0.000 1.000
#> sample_17     2  0.0000     0.8616 0.000 1.000
#> sample_18     2  0.3584     0.8267 0.068 0.932
#> sample_19     2  0.0000     0.8616 0.000 1.000
#> sample_20     2  0.0000     0.8616 0.000 1.000
#> sample_21     2  0.0000     0.8616 0.000 1.000
#> sample_22     1  0.9922     0.2323 0.552 0.448
#> sample_23     1  0.9815     0.3513 0.580 0.420
#> sample_24     2  0.0000     0.8616 0.000 1.000
#> sample_25     2  0.9427     0.3790 0.360 0.640
#> sample_26     2  0.0376     0.8599 0.004 0.996
#> sample_27     1  0.9635     0.4182 0.612 0.388
#> sample_34     1  0.0000     0.8212 1.000 0.000
#> sample_35     1  0.4298     0.7750 0.912 0.088
#> sample_36     1  0.0000     0.8212 1.000 0.000
#> sample_37     1  0.0000     0.8212 1.000 0.000
#> sample_38     1  0.0000     0.8212 1.000 0.000
#> sample_28     1  0.0000     0.8212 1.000 0.000
#> sample_29     1  0.0000     0.8212 1.000 0.000
#> sample_30     1  0.0000     0.8212 1.000 0.000
#> sample_31     1  0.0000     0.8212 1.000 0.000
#> sample_32     1  0.0000     0.8212 1.000 0.000
#> sample_33     1  0.0000     0.8212 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-pam-consensus-heatmap-1
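Conceptually, each entry of the consensus matrix is the fraction of resampled partitions in which the two samples were assigned to the same group. A toy base-R sketch of this idea (illustrative only; not cola's implementation):

```r
# Illustration: each consensus value is the fraction of partitions
# in which two samples end up in the same group.
consensus_matrix = function(partition_list) {
    n = length(partition_list[[1]])
    co = matrix(0, n, n)
    for (cl in partition_list) {
        co = co + outer(cl, cl, "==")  # co-membership indicator
    }
    co / length(partition_list)
}

# Three toy partitions of four samples:
pl = list(c(1, 1, 2, 2), c(1, 1, 1, 2), c(2, 2, 1, 1))
cm = consensus_matrix(pl)
cm[1, 2]  # 1: samples 1 and 2 co-cluster in all three partitions
cm[2, 3]  # 1/3: samples 2 and 3 co-cluster in one partition of three
```

A clean block structure in this matrix (values close to 0 or 1) is what the consensus heatmap makes visible.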

Heatmap of the sample memberships in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-pam-membership-heatmap-1

Once the column classes are determined, we can look for signature rows that are significantly different between the classes; these are candidate markers for the classes. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the result of the call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and performs only the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n ALL.AML(p) k
#> SD:pam 54   1.01e-11 2
#> SD:pam 58   1.94e-12 3
#> SD:pam 63   6.74e-12 4
#> SD:pam 45   1.81e-08 5
#> SD:pam 31   3.96e-01 6
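For a discrete annotation such as ALL/AML, this amounts to a chi-squared test on the contingency table of subgroup labels versus annotation labels (the varying n column reflects that cola first drops samples with unstable class assignments). A toy base-R sketch with made-up labels:

```r
# Illustration with made-up labels: chi-squared test on the
# contingency table of predicted subgroups vs. a known annotation.
subgroup = c(1, 1, 1, 1, 2, 2, 2, 2, 1, 2, 2, 2)
ALL_AML  = c("ALL", "ALL", "ALL", "ALL", "AML", "AML",
             "AML", "AML", "ALL", "AML", "ALL", "AML")
tab = table(subgroup, ALL_AML)
tab
# With so few samples R warns that the chi-squared approximation may
# be inaccurate; the real data has far larger counts.
suppressWarnings(chisq.test(tab)$p.value)
```

A small p-value, as in the table above, indicates that the detected subgroups align with the known ALL/AML annotation.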

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See the vignette for more detailed explanations.


SD:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "mclust"]
# you can also extract it by
# res = res_list["SD:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'SD' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 5.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots generated from res for every k (number of partitions) onto a single page, allowing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk SD-mclust-collect-plots

All the plots in the panels can also be generated by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, k is not accepted as better than k-1.

select_partition_number(res)

plot of chunk SD-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.373           0.801       0.818         0.4100 0.518   0.518
#> 3 3 0.518           0.704       0.795         0.4537 0.806   0.654
#> 4 4 0.414           0.463       0.587         0.1442 0.784   0.560
#> 5 5 0.634           0.721       0.790         0.1525 0.761   0.398
#> 6 6 0.694           0.612       0.778         0.0494 0.940   0.718

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 5

The following table shows the partitions. The membership matrix (the columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability that an item belongs to a certain group, and the final class label for an item is the group with the highest probability.

In get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
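Given the membership matrix, the hard class label is simply the column with the highest probability in each row. In base R (an illustration; the rows here are made up, resembling rows of the table below):

```r
# Illustration: hard class labels from a membership matrix are the
# per-row argmax of the group probabilities.
membership = rbind(c(0.300, 0.700),   # an ambiguous class-2 sample
                   c(0.996, 0.004),   # a confident class-1 sample
                   c(0.016, 0.984))   # a confident class-2 sample
class_label = max.col(membership)
class_label  # 2 1 2
```

Note that max.col() breaks exact ties at random by default; membership rows from real data rarely tie.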


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.8813   8.30e-01 0.300 0.700
#> sample_40     2  0.8661   8.32e-01 0.288 0.712
#> sample_42     2  0.9393   8.03e-01 0.356 0.644
#> sample_47     2  0.7602   8.09e-01 0.220 0.780
#> sample_48     2  0.1184   6.70e-01 0.016 0.984
#> sample_49     2  0.8661   8.32e-01 0.288 0.712
#> sample_41     2  0.1184   6.70e-01 0.016 0.984
#> sample_43     2  0.8443   8.30e-01 0.272 0.728
#> sample_44     2  0.8144   8.24e-01 0.252 0.748
#> sample_45     2  0.7602   8.09e-01 0.220 0.780
#> sample_46     2  0.7745   8.14e-01 0.228 0.772
#> sample_70     2  0.8909   8.30e-01 0.308 0.692
#> sample_71     2  0.9393   8.02e-01 0.356 0.644
#> sample_72     2  0.9286   8.11e-01 0.344 0.656
#> sample_68     2  0.1184   6.70e-01 0.016 0.984
#> sample_69     2  0.1184   6.70e-01 0.016 0.984
#> sample_67     1  0.4298   8.40e-01 0.912 0.088
#> sample_55     2  0.9248   8.09e-01 0.340 0.660
#> sample_56     2  0.8661   8.32e-01 0.288 0.712
#> sample_59     2  0.8955   8.29e-01 0.312 0.688
#> sample_52     1  0.0376   9.35e-01 0.996 0.004
#> sample_53     1  0.0000   9.35e-01 1.000 0.000
#> sample_51     1  0.0000   9.35e-01 1.000 0.000
#> sample_50     1  0.0000   9.35e-01 1.000 0.000
#> sample_54     1  0.2948   8.97e-01 0.948 0.052
#> sample_57     1  0.0376   9.35e-01 0.996 0.004
#> sample_58     1  0.0672   9.34e-01 0.992 0.008
#> sample_60     1  0.2778   9.02e-01 0.952 0.048
#> sample_61     1  0.0672   9.34e-01 0.992 0.008
#> sample_65     1  0.0376   9.35e-01 0.996 0.004
#> sample_66     1  0.9661  -1.94e-01 0.608 0.392
#> sample_63     1  0.0376   9.35e-01 0.996 0.004
#> sample_64     1  0.2043   9.17e-01 0.968 0.032
#> sample_62     1  0.0376   9.35e-01 0.996 0.004
#> sample_1      2  0.8813   8.31e-01 0.300 0.700
#> sample_2      1  0.9393   9.53e-05 0.644 0.356
#> sample_3      2  0.9815   7.07e-01 0.420 0.580
#> sample_4      2  0.8763   8.32e-01 0.296 0.704
#> sample_5      2  0.1184   6.70e-01 0.016 0.984
#> sample_6      2  0.9815   7.07e-01 0.420 0.580
#> sample_7      2  0.8713   8.32e-01 0.292 0.708
#> sample_8      2  0.9000   8.22e-01 0.316 0.684
#> sample_9      2  0.9815   7.07e-01 0.420 0.580
#> sample_10     2  0.9815   7.07e-01 0.420 0.580
#> sample_11     2  0.9815   7.07e-01 0.420 0.580
#> sample_12     1  0.3879   8.60e-01 0.924 0.076
#> sample_13     2  0.1184   6.70e-01 0.016 0.984
#> sample_14     2  0.9815   7.07e-01 0.420 0.580
#> sample_15     2  0.1184   6.70e-01 0.016 0.984
#> sample_16     2  0.8386   8.29e-01 0.268 0.732
#> sample_17     2  0.9732   7.45e-01 0.404 0.596
#> sample_18     2  0.9129   8.22e-01 0.328 0.672
#> sample_19     2  0.8016   8.21e-01 0.244 0.756
#> sample_20     2  0.1184   6.70e-01 0.016 0.984
#> sample_21     2  0.7745   8.04e-01 0.228 0.772
#> sample_22     2  0.9460   7.94e-01 0.364 0.636
#> sample_23     2  0.9815   7.07e-01 0.420 0.580
#> sample_24     2  0.1184   6.70e-01 0.016 0.984
#> sample_25     2  0.9393   8.03e-01 0.356 0.644
#> sample_26     2  0.8608   8.32e-01 0.284 0.716
#> sample_27     2  0.8661   8.32e-01 0.288 0.712
#> sample_34     1  0.0000   9.35e-01 1.000 0.000
#> sample_35     1  0.2423   9.10e-01 0.960 0.040
#> sample_36     1  0.0000   9.35e-01 1.000 0.000
#> sample_37     1  0.0000   9.35e-01 1.000 0.000
#> sample_38     1  0.0376   9.34e-01 0.996 0.004
#> sample_28     1  0.0000   9.35e-01 1.000 0.000
#> sample_29     1  0.2423   9.08e-01 0.960 0.040
#> sample_30     1  0.0000   9.35e-01 1.000 0.000
#> sample_31     1  0.0000   9.35e-01 1.000 0.000
#> sample_32     1  0.0376   9.35e-01 0.996 0.004
#> sample_33     1  0.0000   9.35e-01 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-SD-mclust-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-mclust-membership-heatmap-1

Once the column classes are determined, we can look for signature rows that are significantly different between the classes; these are candidate markers for the classes. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the result of the call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and performs only the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk SD-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n ALL.AML(p) k
#> SD:mclust 70   2.94e-14 2
#> SD:mclust 60   6.91e-13 3
#> SD:mclust 35   2.99e-07 4
#> SD:mclust 64   2.68e-12 5
#> SD:mclust 58   1.74e-10 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See the vignette for more detailed explanations.


SD:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["SD", "NMF"]
# you can also extract it by
# res = res_list["SD:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'SD' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots generated from res for every k (number of partitions) onto a single page, allowing an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk SD-NMF-collect-plots

All the plots in the panels can also be generated by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for k is to the partition for k-1; if they are too similar, k is not accepted as better than k-1.
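The PAC (proportion of ambiguous clustering) score is the fraction of consensus values that are neither clearly 0 nor clearly 1. A minimal sketch of the idea in base R, assuming the common (0.1, 0.9) ambiguity interval (cola's actual PAC calculation differs in detail):

```r
# Illustration: PAC as the fraction of off-diagonal consensus values
# falling in an assumed "ambiguous" interval (0.1, 0.9).
pac = function(cm, lower = 0.1, upper = 0.9) {
    v = cm[lower.tri(cm)]
    mean(v > lower & v < upper)
}

cm = matrix(c(1.00, 0.95, 0.05, 0.50,
              0.95, 1.00, 0.10, 0.40,
              0.05, 0.10, 1.00, 0.85,
              0.50, 0.40, 0.85, 1.00), nrow = 4)
1 - pac(cm)  # 0.5; get_stats() reports this "1-PAC" direction
```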

select_partition_number(res)

plot of chunk SD-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.633           0.823       0.927         0.4892 0.512   0.512
#> 3 3 0.582           0.651       0.826         0.3369 0.809   0.635
#> 4 4 0.517           0.454       0.690         0.1303 0.863   0.632
#> 5 5 0.608           0.548       0.765         0.0718 0.873   0.576
#> 6 6 0.623           0.415       0.697         0.0416 0.903   0.606

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (the columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability that an item belongs to a certain group, and the final class label for an item is the group with the highest probability.

In get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
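
As a sketch of the entropy calculation, assuming it is the normalized Shannon entropy of a sample's membership row (an assumption about cola's exact formula), the value reported for sample_39 below, whose membership row is (0.240, 0.760), can be reproduced in base R:

```r
# Normalized Shannon entropy of one sample's membership probabilities
# (assumed formula); reproduces the 0.7950 reported for sample_39 below.
p = c(0.240, 0.760)
entropy = -sum(p * log2(p)) / log2(length(p))
round(entropy, 4)
#> [1] 0.795
```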


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.7950     0.7094 0.240 0.760
#> sample_40     2  0.8499     0.6610 0.276 0.724
#> sample_42     2  0.0000     0.9065 0.000 1.000
#> sample_47     2  0.0000     0.9065 0.000 1.000
#> sample_48     2  0.0000     0.9065 0.000 1.000
#> sample_49     2  0.9833     0.3568 0.424 0.576
#> sample_41     2  0.0000     0.9065 0.000 1.000
#> sample_43     2  0.0000     0.9065 0.000 1.000
#> sample_44     2  0.0000     0.9065 0.000 1.000
#> sample_45     2  0.0000     0.9065 0.000 1.000
#> sample_46     2  0.0000     0.9065 0.000 1.000
#> sample_70     2  0.0000     0.9065 0.000 1.000
#> sample_71     2  0.0000     0.9065 0.000 1.000
#> sample_72     2  0.0000     0.9065 0.000 1.000
#> sample_68     2  0.0000     0.9065 0.000 1.000
#> sample_69     2  0.0000     0.9065 0.000 1.000
#> sample_67     1  0.5737     0.7989 0.864 0.136
#> sample_55     2  0.6623     0.7837 0.172 0.828
#> sample_56     2  0.4939     0.8391 0.108 0.892
#> sample_59     2  0.0000     0.9065 0.000 1.000
#> sample_52     1  0.0000     0.9319 1.000 0.000
#> sample_53     1  0.0000     0.9319 1.000 0.000
#> sample_51     1  0.0000     0.9319 1.000 0.000
#> sample_50     1  0.0000     0.9319 1.000 0.000
#> sample_54     1  0.6148     0.7786 0.848 0.152
#> sample_57     1  0.0000     0.9319 1.000 0.000
#> sample_58     1  0.0000     0.9319 1.000 0.000
#> sample_60     1  0.0376     0.9288 0.996 0.004
#> sample_61     1  0.0000     0.9319 1.000 0.000
#> sample_65     1  0.0000     0.9319 1.000 0.000
#> sample_66     2  0.8763     0.5365 0.296 0.704
#> sample_63     1  0.0000     0.9319 1.000 0.000
#> sample_64     1  0.0000     0.9319 1.000 0.000
#> sample_62     1  0.0000     0.9319 1.000 0.000
#> sample_1      2  0.0000     0.9065 0.000 1.000
#> sample_2      1  0.9833     0.2446 0.576 0.424
#> sample_3      2  0.5946     0.8097 0.144 0.856
#> sample_4      2  0.0000     0.9065 0.000 1.000
#> sample_5      2  0.0000     0.9065 0.000 1.000
#> sample_6      2  0.8861     0.6154 0.304 0.696
#> sample_7      2  0.5178     0.8332 0.116 0.884
#> sample_8      2  1.0000     0.1290 0.496 0.504
#> sample_9      2  0.0000     0.9065 0.000 1.000
#> sample_10     2  0.9635     0.4449 0.388 0.612
#> sample_11     2  0.0000     0.9065 0.000 1.000
#> sample_12     1  0.0376     0.9287 0.996 0.004
#> sample_13     2  0.0000     0.9065 0.000 1.000
#> sample_14     2  0.0000     0.9065 0.000 1.000
#> sample_15     2  0.0000     0.9065 0.000 1.000
#> sample_16     2  0.0000     0.9065 0.000 1.000
#> sample_17     2  0.1843     0.8879 0.028 0.972
#> sample_18     2  0.0672     0.9026 0.008 0.992
#> sample_19     2  0.0000     0.9065 0.000 1.000
#> sample_20     2  0.0000     0.9065 0.000 1.000
#> sample_21     2  0.0000     0.9065 0.000 1.000
#> sample_22     1  0.9963    -0.0252 0.536 0.464
#> sample_23     2  0.8499     0.6609 0.276 0.724
#> sample_24     2  0.0000     0.9065 0.000 1.000
#> sample_25     2  0.8555     0.6536 0.280 0.720
#> sample_26     2  0.0000     0.9065 0.000 1.000
#> sample_27     1  0.9988    -0.0885 0.520 0.480
#> sample_34     1  0.0000     0.9319 1.000 0.000
#> sample_35     1  0.0000     0.9319 1.000 0.000
#> sample_36     1  0.0000     0.9319 1.000 0.000
#> sample_37     1  0.0000     0.9319 1.000 0.000
#> sample_38     1  0.0000     0.9319 1.000 0.000
#> sample_28     1  0.0000     0.9319 1.000 0.000
#> sample_29     1  0.0000     0.9319 1.000 0.000
#> sample_30     1  0.0000     0.9319 1.000 0.000
#> sample_31     1  0.0000     0.9319 1.000 0.000
#> sample_32     1  0.0000     0.9319 1.000 0.000
#> sample_33     1  0.0000     0.9319 1.000 0.000

Heatmaps for the consensus matrix. They visualize the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)
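
To make the meaning of the consensus matrix concrete, here is a minimal toy example (hypothetical partitions, not from this dataset): the consensus value for a pair of samples is the fraction of partitions in which the two fall into the same group.

```r
# Three toy partitions of four samples (hypothetical values); the
# consensus for samples i and j is the fraction of partitions that
# place them in the same group.
partitions = rbind(c(1, 1, 2, 2),
                   c(1, 1, 2, 2),
                   c(1, 2, 2, 2))
n = ncol(partitions)
consensus = matrix(0, n, n)
for (i in seq_len(n)) for (j in seq_len(n))
  consensus[i, j] = mean(partitions[, i] == partitions[, j])
consensus[1, 2]   # samples 1 and 2 co-cluster in 2 of 3 partitions
#> [1] 0.6666667
```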

plot of chunk tab-SD-NMF-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-SD-NMF-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures which are significantly different between classes; these can be candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-SD-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-SD-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk SD-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot which shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-SD-NMF-dimension-reduction-1

The following heatmap shows how the subgroups are split when increasing k:

collect_classes(res)

plot of chunk SD-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n ALL.AML(p) k
#> SD:NMF 66   1.35e-12 2
#> SD:NMF 59   7.38e-12 3
#> SD:NMF 38   5.34e-09 4
#> SD:NMF 49   9.13e-09 5
#> SD:NMF 33   3.22e-07 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "hclust"]
# you can also extract it by
# res = res_list["CV:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'CV' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto one single page, to provide an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk CV-hclust-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition for k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk CV-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.214           0.570       0.805         0.4538 0.512   0.512
#> 3 3 0.226           0.635       0.734         0.3928 0.723   0.505
#> 4 4 0.370           0.544       0.706         0.1323 0.962   0.883
#> 5 5 0.503           0.450       0.665         0.0734 0.922   0.748
#> 6 6 0.555           0.431       0.637         0.0435 0.918   0.708
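
As a rough sketch of where the 1-PAC column comes from: PAC is the proportion of ambiguous pairwise consensus values, i.e. those falling inside an intermediate interval. The toy consensus values and the (0.1, 0.9) interval below are assumptions for illustration; cola's implementation may differ in its details.

```r
# Toy 3-sample consensus matrix (hypothetical values). PAC counts the
# pairwise consensus values inside the assumed ambiguous interval (0.1, 0.9).
cm = matrix(c(1.00, 0.95, 0.05,
              0.95, 1.00, 0.50,
              0.05, 0.50, 1.00), nrow = 3)
v = cm[lower.tri(cm)]            # pairwise consensus values: 0.95, 0.05, 0.50
pac = mean(v > 0.1 & v < 0.9)    # only 0.50 is ambiguous here
1 - pac                          # larger 1-PAC means a more stable partition
#> [1] 0.6666667
```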

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of an item belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  1.0000     -0.189 0.504 0.496
#> sample_40     1  0.9661      0.219 0.608 0.392
#> sample_42     2  0.6247      0.685 0.156 0.844
#> sample_47     2  0.3584      0.702 0.068 0.932
#> sample_48     2  0.0000      0.693 0.000 1.000
#> sample_49     1  0.9754      0.142 0.592 0.408
#> sample_41     2  0.0000      0.693 0.000 1.000
#> sample_43     2  0.4939      0.699 0.108 0.892
#> sample_44     2  0.4690      0.704 0.100 0.900
#> sample_45     2  0.4431      0.705 0.092 0.908
#> sample_46     2  0.6247      0.688 0.156 0.844
#> sample_70     2  0.6247      0.688 0.156 0.844
#> sample_71     2  0.8499      0.597 0.276 0.724
#> sample_72     2  0.8499      0.597 0.276 0.724
#> sample_68     2  0.0000      0.693 0.000 1.000
#> sample_69     2  0.0000      0.693 0.000 1.000
#> sample_67     2  0.9170      0.474 0.332 0.668
#> sample_55     1  0.9998     -0.167 0.508 0.492
#> sample_56     2  1.0000      0.165 0.496 0.504
#> sample_59     2  0.9850      0.360 0.428 0.572
#> sample_52     1  0.3274      0.799 0.940 0.060
#> sample_53     1  0.0672      0.787 0.992 0.008
#> sample_51     1  0.0376      0.784 0.996 0.004
#> sample_50     1  0.0376      0.784 0.996 0.004
#> sample_54     1  0.4431      0.789 0.908 0.092
#> sample_57     1  0.4022      0.794 0.920 0.080
#> sample_58     1  0.1843      0.796 0.972 0.028
#> sample_60     1  0.4298      0.790 0.912 0.088
#> sample_61     1  0.5737      0.740 0.864 0.136
#> sample_65     1  0.3114      0.800 0.944 0.056
#> sample_66     2  0.8016      0.538 0.244 0.756
#> sample_63     1  0.3431      0.799 0.936 0.064
#> sample_64     1  0.8144      0.577 0.748 0.252
#> sample_62     1  0.3431      0.799 0.936 0.064
#> sample_1      2  0.9866      0.355 0.432 0.568
#> sample_2      2  0.9000      0.508 0.316 0.684
#> sample_3      2  0.9850      0.356 0.428 0.572
#> sample_4      2  0.9815      0.370 0.420 0.580
#> sample_5      2  0.0000      0.693 0.000 1.000
#> sample_6      2  0.9850      0.356 0.428 0.572
#> sample_7      2  0.9815      0.370 0.420 0.580
#> sample_8      2  0.9998      0.181 0.492 0.508
#> sample_9      2  0.9833      0.363 0.424 0.576
#> sample_10     2  0.9850      0.356 0.428 0.572
#> sample_11     2  0.9833      0.363 0.424 0.576
#> sample_12     1  0.9815      0.102 0.580 0.420
#> sample_13     2  0.0000      0.693 0.000 1.000
#> sample_14     2  0.4939      0.699 0.108 0.892
#> sample_15     2  0.0000      0.693 0.000 1.000
#> sample_16     2  0.4562      0.705 0.096 0.904
#> sample_17     2  0.4022      0.699 0.080 0.920
#> sample_18     2  0.9661      0.447 0.392 0.608
#> sample_19     2  0.4562      0.705 0.096 0.904
#> sample_20     2  0.0000      0.693 0.000 1.000
#> sample_21     2  0.0000      0.693 0.000 1.000
#> sample_22     2  0.9998      0.187 0.492 0.508
#> sample_23     2  0.9850      0.356 0.428 0.572
#> sample_24     2  0.0000      0.693 0.000 1.000
#> sample_25     2  0.9993      0.232 0.484 0.516
#> sample_26     2  0.5178      0.703 0.116 0.884
#> sample_27     1  0.9754      0.142 0.592 0.408
#> sample_34     1  0.5408      0.757 0.876 0.124
#> sample_35     1  0.7528      0.638 0.784 0.216
#> sample_36     1  0.0938      0.787 0.988 0.012
#> sample_37     1  0.0938      0.786 0.988 0.012
#> sample_38     1  0.4690      0.758 0.900 0.100
#> sample_28     1  0.2603      0.788 0.956 0.044
#> sample_29     2  0.9896      0.205 0.440 0.560
#> sample_30     1  0.0938      0.787 0.988 0.012
#> sample_31     1  0.2948      0.799 0.948 0.052
#> sample_32     1  0.3114      0.798 0.944 0.056
#> sample_33     1  0.0376      0.784 0.996 0.004

Heatmaps for the consensus matrix. They visualize the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-hclust-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-hclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures which are significantly different between classes; these can be candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
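
A common follow-up once tb is available is to filter the signatures by FDR. The small data frame below just reproduces the example rows shown above, and the 0.01 cutoff is an arbitrary choice for illustration.

```r
# Reproduce the example rows above and keep signatures with fdr < 0.01
# (the cutoff is arbitrary, chosen only for illustration).
tb = data.frame(
  which_row = c(38, 40, 55, 59, 60, 98),
  fdr = c(0.042760348, 0.018707592, 0.019134737,
          0.006059896, 0.018055526, 0.009384629)
)
tb$which_row[tb$fdr < 0.01]
#> [1] 59 98
```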

UMAP plot which shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-hclust-dimension-reduction-1

The following heatmap shows how the subgroups are split when increasing k:

collect_classes(res)

plot of chunk CV-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n ALL.AML(p) k
#> CV:hclust 49   1.25e-10 2
#> CV:hclust 59   1.23e-12 3
#> CV:hclust 54   1.06e-10 4
#> CV:hclust 35   4.65e-07 5
#> CV:hclust 30   1.38e-06 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "kmeans"]
# you can also extract it by
# res = res_list["CV:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'CV' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto one single page, to provide an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk CV-kmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition for k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk CV-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.653           0.806       0.905         0.4836 0.532   0.532
#> 3 3 0.739           0.864       0.901         0.3661 0.774   0.583
#> 4 4 0.615           0.639       0.792         0.1150 0.957   0.869
#> 5 5 0.687           0.620       0.740         0.0721 0.862   0.553
#> 6 6 0.727           0.715       0.802         0.0472 0.915   0.614

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 3

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of an item belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.9754     0.5141 0.408 0.592
#> sample_40     2  0.9850     0.4780 0.428 0.572
#> sample_42     2  0.0000     0.8470 0.000 1.000
#> sample_47     2  0.0000     0.8470 0.000 1.000
#> sample_48     2  0.0000     0.8470 0.000 1.000
#> sample_49     2  0.9896     0.4537 0.440 0.560
#> sample_41     2  0.0000     0.8470 0.000 1.000
#> sample_43     2  0.0000     0.8470 0.000 1.000
#> sample_44     2  0.0376     0.8459 0.004 0.996
#> sample_45     2  0.0000     0.8470 0.000 1.000
#> sample_46     2  0.0000     0.8470 0.000 1.000
#> sample_70     2  0.1184     0.8426 0.016 0.984
#> sample_71     2  0.0000     0.8470 0.000 1.000
#> sample_72     2  0.0000     0.8470 0.000 1.000
#> sample_68     2  0.0000     0.8470 0.000 1.000
#> sample_69     2  0.0000     0.8470 0.000 1.000
#> sample_67     1  0.9988     0.0928 0.520 0.480
#> sample_55     2  0.9635     0.5459 0.388 0.612
#> sample_56     2  0.9775     0.5071 0.412 0.588
#> sample_59     2  0.0376     0.8459 0.004 0.996
#> sample_52     1  0.0376     0.9659 0.996 0.004
#> sample_53     1  0.0376     0.9659 0.996 0.004
#> sample_51     1  0.0376     0.9659 0.996 0.004
#> sample_50     1  0.0376     0.9659 0.996 0.004
#> sample_54     1  0.2603     0.9261 0.956 0.044
#> sample_57     1  0.0000     0.9643 1.000 0.000
#> sample_58     1  0.0000     0.9643 1.000 0.000
#> sample_60     1  0.0376     0.9659 0.996 0.004
#> sample_61     1  0.0000     0.9643 1.000 0.000
#> sample_65     1  0.0000     0.9643 1.000 0.000
#> sample_66     2  0.2603     0.8181 0.044 0.956
#> sample_63     1  0.0376     0.9659 0.996 0.004
#> sample_64     1  0.0000     0.9643 1.000 0.000
#> sample_62     1  0.0376     0.9659 0.996 0.004
#> sample_1      2  0.1843     0.8376 0.028 0.972
#> sample_2      2  0.5059     0.7626 0.112 0.888
#> sample_3      2  0.9580     0.5576 0.380 0.620
#> sample_4      2  0.1414     0.8411 0.020 0.980
#> sample_5      2  0.0000     0.8470 0.000 1.000
#> sample_6      2  0.9580     0.5576 0.380 0.620
#> sample_7      2  0.9580     0.5575 0.380 0.620
#> sample_8      2  0.9881     0.4623 0.436 0.564
#> sample_9      2  0.0376     0.8459 0.004 0.996
#> sample_10     2  0.9580     0.5576 0.380 0.620
#> sample_11     2  0.0000     0.8470 0.000 1.000
#> sample_12     1  0.0376     0.9659 0.996 0.004
#> sample_13     2  0.0000     0.8470 0.000 1.000
#> sample_14     2  0.0000     0.8470 0.000 1.000
#> sample_15     2  0.0000     0.8470 0.000 1.000
#> sample_16     2  0.0376     0.8459 0.004 0.996
#> sample_17     2  0.0000     0.8470 0.000 1.000
#> sample_18     2  0.7883     0.6990 0.236 0.764
#> sample_19     2  0.0000     0.8470 0.000 1.000
#> sample_20     2  0.0000     0.8470 0.000 1.000
#> sample_21     2  0.0000     0.8470 0.000 1.000
#> sample_22     2  0.9881     0.4624 0.436 0.564
#> sample_23     2  0.9580     0.5576 0.380 0.620
#> sample_24     2  0.0000     0.8470 0.000 1.000
#> sample_25     2  0.9460     0.5722 0.364 0.636
#> sample_26     2  0.0376     0.8459 0.004 0.996
#> sample_27     2  0.9896     0.4537 0.440 0.560
#> sample_34     1  0.0000     0.9643 1.000 0.000
#> sample_35     1  0.0000     0.9643 1.000 0.000
#> sample_36     1  0.0376     0.9659 0.996 0.004
#> sample_37     1  0.0376     0.9659 0.996 0.004
#> sample_38     1  0.0376     0.9659 0.996 0.004
#> sample_28     1  0.0376     0.9659 0.996 0.004
#> sample_29     1  0.6531     0.7554 0.832 0.168
#> sample_30     1  0.0376     0.9659 0.996 0.004
#> sample_31     1  0.0000     0.9643 1.000 0.000
#> sample_32     1  0.0376     0.9659 0.996 0.004
#> sample_33     1  0.0376     0.9659 0.996 0.004

Heatmaps for the consensus matrix. They visualize the probability that two samples are in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-kmeans-consensus-heatmap-1

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-kmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures which are significantly different between classes; these can be candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot which shows how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups are split when increasing k:

collect_classes(res)

plot of chunk CV-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n ALL.AML(p) k
#> CV:kmeans 66   2.15e-13 2
#> CV:kmeans 68   1.06e-13 3
#> CV:kmeans 57   1.78e-11 4
#> CV:kmeans 57   7.59e-11 5
#> CV:kmeans 61   5.20e-11 6
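
The discrete case of this test can be sketched in base R with a chi-squared test on the contingency table of predicted class versus known label. The counts below are hypothetical and not taken from this dataset.

```r
# Hypothetical class assignments and ALL/AML labels for 66 samples;
# chisq.test() on the 2x2 contingency table gives the kind of p-value
# reported in the table above.
cls  = rep(c(1, 2), times = c(30, 36))
anno = rep(c("ALL", "AML", "ALL", "AML"), times = c(27, 3, 5, 31))
tab  = table(cls, anno)
chisq.test(tab)$p.value   # very small: classes and labels are associated
```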

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "skmeans"]
# you can also extract it by
# res = res_list["CV:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'CV' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) onto one single page, to provide an easy and fast comparison between different values of k.

collect_plots(res)

plot of chunk CV-skmeans-collect-plots

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k. Detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition for k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk CV-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.629           0.748       0.890         0.4970 0.495   0.495
#> 3 3 0.837           0.857       0.938         0.3466 0.743   0.525
#> 4 4 0.655           0.698       0.828         0.1068 0.896   0.700
#> 5 5 0.617           0.503       0.671         0.0677 0.887   0.609
#> 6 6 0.629           0.505       0.717         0.0443 0.884   0.524

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 3

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. A value in the membership matrix represents the probability of an item belonging to a certain group, and the final class label for an item is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1   0.994      0.403 0.544 0.456
#> sample_40     1   0.993      0.410 0.548 0.452
#> sample_42     2   0.000      0.929 0.000 1.000
#> sample_47     2   0.000      0.929 0.000 1.000
#> sample_48     2   0.000      0.929 0.000 1.000
#> sample_49     1   0.985      0.449 0.572 0.428
#> sample_41     2   0.000      0.929 0.000 1.000
#> sample_43     2   0.000      0.929 0.000 1.000
#> sample_44     2   0.000      0.929 0.000 1.000
#> sample_45     2   0.000      0.929 0.000 1.000
#> sample_46     2   0.000      0.929 0.000 1.000
#> sample_70     2   0.000      0.929 0.000 1.000
#> sample_71     2   0.000      0.929 0.000 1.000
#> sample_72     2   0.000      0.929 0.000 1.000
#> sample_68     2   0.000      0.929 0.000 1.000
#> sample_69     2   0.000      0.929 0.000 1.000
#> sample_67     2   0.995      0.155 0.460 0.540
#> sample_55     1   0.995      0.393 0.540 0.460
#> sample_56     1   0.995      0.395 0.540 0.460
#> sample_59     2   0.000      0.929 0.000 1.000
#> sample_52     1   0.000      0.803 1.000 0.000
#> sample_53     1   0.000      0.803 1.000 0.000
#> sample_51     1   0.000      0.803 1.000 0.000
#> sample_50     1   0.000      0.803 1.000 0.000
#> sample_54     1   0.506      0.712 0.888 0.112
#> sample_57     1   0.000      0.803 1.000 0.000
#> sample_58     1   0.000      0.803 1.000 0.000
#> sample_60     1   0.000      0.803 1.000 0.000
#> sample_61     1   0.000      0.803 1.000 0.000
#> sample_65     1   0.000      0.803 1.000 0.000
#> sample_66     2   0.981      0.246 0.420 0.580
#> sample_63     1   0.000      0.803 1.000 0.000
#> sample_64     1   0.000      0.803 1.000 0.000
#> sample_62     1   0.000      0.803 1.000 0.000
#> sample_1      2   0.000      0.929 0.000 1.000
#> sample_2      2   0.993      0.174 0.452 0.548
#> sample_3      1   0.996      0.386 0.536 0.464
#> sample_4      2   0.000      0.929 0.000 1.000
#> sample_5      2   0.000      0.929 0.000 1.000
#> sample_6      1   0.996      0.386 0.536 0.464
#> sample_7      1   0.998      0.357 0.524 0.476
#> sample_8      1   0.987      0.442 0.568 0.432
#> sample_9      2   0.000      0.929 0.000 1.000
#> sample_10     1   0.925      0.552 0.660 0.340
#> sample_11     2   0.000      0.929 0.000 1.000
#> sample_12     1   0.000      0.803 1.000 0.000
#> sample_13     2   0.000      0.929 0.000 1.000
#> sample_14     2   0.000      0.929 0.000 1.000
#> sample_15     2   0.000      0.929 0.000 1.000
#> sample_16     2   0.000      0.929 0.000 1.000
#> sample_17     2   0.402      0.840 0.080 0.920
#> sample_18     2   0.443      0.817 0.092 0.908
#> sample_19     2   0.000      0.929 0.000 1.000
#> sample_20     2   0.000      0.929 0.000 1.000
#> sample_21     2   0.000      0.929 0.000 1.000
#> sample_22     1   0.981      0.460 0.580 0.420
#> sample_23     1   0.994      0.403 0.544 0.456
#> sample_24     2   0.000      0.929 0.000 1.000
#> sample_25     2   0.855      0.457 0.280 0.720
#> sample_26     2   0.000      0.929 0.000 1.000
#> sample_27     1   0.983      0.454 0.576 0.424
#> sample_34     1   0.000      0.803 1.000 0.000
#> sample_35     1   0.000      0.803 1.000 0.000
#> sample_36     1   0.000      0.803 1.000 0.000
#> sample_37     1   0.000      0.803 1.000 0.000
#> sample_38     1   0.000      0.803 1.000 0.000
#> sample_28     1   0.000      0.803 1.000 0.000
#> sample_29     1   0.861      0.476 0.716 0.284
#> sample_30     1   0.000      0.803 1.000 0.000
#> sample_31     1   0.000      0.803 1.000 0.000
#> sample_32     1   0.000      0.803 1.000 0.000
#> sample_33     1   0.000      0.803 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability that two samples belong to the same group.
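In consensus partitioning, this probability is estimated as the fraction of resampling runs in which two samples end up in the same group. A minimal sketch of how such a matrix is built (toy label vectors, not cola's implementation):

```python
import numpy as np

def consensus_matrix(partitions):
    """Consensus matrix from a list of partitions (one label vector
    per resampling run): entry (i, j) is the fraction of runs in
    which samples i and j were assigned to the same group."""
    partitions = np.asarray(partitions)
    n = partitions.shape[1]
    cons = np.zeros((n, n))
    for labels in partitions:
        cons += labels[:, None] == labels[None, :]
    return cons / len(partitions)

# Three toy partitions of four samples.
runs = [[1, 1, 2, 2],
        [1, 1, 2, 2],
        [1, 2, 2, 2]]
print(round(consensus_matrix(runs)[0, 1], 3))  # 0.667: co-clustered in 2/3 runs
```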

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-skmeans-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-skmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for particular classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-skmeans-signature_compare

get_signatures() returns a data frame invisibly, so to obtain the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
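As a sketch of how these columns are typically used (plain Python records standing in for the R data frame; the values are copied from the example output above, and the FDR cutoff is just an illustration):

```python
# Rows of `tb` from the example output, as plain records.
tb = [
    {"which_row": 38, "fdr": 0.042760348, "km": 1},
    {"which_row": 40, "fdr": 0.018707592, "km": 1},
    {"which_row": 55, "fdr": 0.019134737, "km": 1},
    {"which_row": 59, "fdr": 0.006059896, "km": 1},
    {"which_row": 60, "fdr": 0.018055526, "km": 1},
    {"which_row": 98, "fdr": 0.009384629, "km": 2},
]

# Apply a stricter FDR cutoff and recover the input-matrix row
# indices (1-based, as in R) of the remaining signatures.
strict = [r["which_row"] for r in tb if r["fdr"] < 0.02]
print(strict)  # [40, 55, 59, 60, 98]
```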

UMAP plot, which shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-skmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> CV:skmeans 55   3.77e-11 2
#> CV:skmeans 67   1.71e-13 3
#> CV:skmeans 60   4.20e-12 4
#> CV:skmeans 45   9.25e-10 5
#> CV:skmeans 47   9.43e-09 6
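As a minimal illustration of the chi-squared contingency table test used for discrete annotations (the counts below are hypothetical, not taken from this report):

```python
def chi2_statistic(table):
    """Pearson's chi-squared statistic for a contingency table of
    predicted subgroup (rows) x known annotation (columns)."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical 2 x 2 table: predicted subgroup vs ALL/AML annotation.
table = [[30, 2],
         [3, 25]]
print(round(chi2_statistic(table), 1))  # 41.6
# Far above the 3.84 critical value (dof = 1, alpha = 0.05), so the
# subgroups associate strongly with the annotation.
```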

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "pam"]
# you can also extract it by
# res = res_list["CV:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'CV' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) into a single page, providing an easy and fast comparison between the different k.

collect_plots(res)

plot of chunk CV-pam-collect-plots

All of the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for the current k is to the partition for k-1; if the two are too similar, k is not accepted as a better choice than k-1.

select_partition_number(res)

plot of chunk CV-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.546           0.784       0.904         0.4958 0.496   0.496
#> 3 3 0.462           0.713       0.835         0.3262 0.754   0.539
#> 4 4 0.608           0.723       0.822         0.1157 0.937   0.812
#> 5 5 0.633           0.600       0.793         0.0736 0.892   0.635
#> 6 6 0.651           0.483       0.711         0.0433 0.962   0.830

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (the columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.9909      0.103 0.444 0.556
#> sample_40     1  0.7745      0.739 0.772 0.228
#> sample_42     2  0.2603      0.886 0.044 0.956
#> sample_47     2  0.0000      0.909 0.000 1.000
#> sample_48     2  0.0000      0.909 0.000 1.000
#> sample_49     1  0.6438      0.799 0.836 0.164
#> sample_41     2  0.0000      0.909 0.000 1.000
#> sample_43     2  0.0000      0.909 0.000 1.000
#> sample_44     2  0.0000      0.909 0.000 1.000
#> sample_45     2  0.0000      0.909 0.000 1.000
#> sample_46     2  0.0000      0.909 0.000 1.000
#> sample_70     1  0.9963      0.242 0.536 0.464
#> sample_71     2  0.3584      0.868 0.068 0.932
#> sample_72     2  0.0672      0.904 0.008 0.992
#> sample_68     2  0.0000      0.909 0.000 1.000
#> sample_69     2  0.0000      0.909 0.000 1.000
#> sample_67     2  0.9170      0.545 0.332 0.668
#> sample_55     1  0.6973      0.781 0.812 0.188
#> sample_56     1  1.0000      0.134 0.500 0.500
#> sample_59     2  0.8386      0.571 0.268 0.732
#> sample_52     1  0.0000      0.872 1.000 0.000
#> sample_53     1  0.0000      0.872 1.000 0.000
#> sample_51     1  0.0000      0.872 1.000 0.000
#> sample_50     1  0.0000      0.872 1.000 0.000
#> sample_54     1  0.6148      0.779 0.848 0.152
#> sample_57     1  0.0000      0.872 1.000 0.000
#> sample_58     1  0.0000      0.872 1.000 0.000
#> sample_60     1  0.2603      0.861 0.956 0.044
#> sample_61     1  0.0000      0.872 1.000 0.000
#> sample_65     1  0.0000      0.872 1.000 0.000
#> sample_66     2  0.8386      0.653 0.268 0.732
#> sample_63     1  0.0000      0.872 1.000 0.000
#> sample_64     1  0.3274      0.855 0.940 0.060
#> sample_62     1  0.1843      0.865 0.972 0.028
#> sample_1      2  0.0938      0.903 0.012 0.988
#> sample_2      2  0.8081      0.684 0.248 0.752
#> sample_3      1  0.6712      0.791 0.824 0.176
#> sample_4      1  0.9988      0.223 0.520 0.480
#> sample_5      2  0.0000      0.909 0.000 1.000
#> sample_6      1  0.6048      0.809 0.852 0.148
#> sample_7      1  0.7139      0.774 0.804 0.196
#> sample_8      1  0.9977      0.199 0.528 0.472
#> sample_9      2  0.9833      0.124 0.424 0.576
#> sample_10     1  0.6438      0.798 0.836 0.164
#> sample_11     2  0.3431      0.872 0.064 0.936
#> sample_12     1  0.0376      0.871 0.996 0.004
#> sample_13     2  0.0000      0.909 0.000 1.000
#> sample_14     2  0.1184      0.901 0.016 0.984
#> sample_15     2  0.0000      0.909 0.000 1.000
#> sample_16     2  0.0000      0.909 0.000 1.000
#> sample_17     2  0.0000      0.909 0.000 1.000
#> sample_18     2  0.4298      0.851 0.088 0.912
#> sample_19     2  0.0000      0.909 0.000 1.000
#> sample_20     2  0.0000      0.909 0.000 1.000
#> sample_21     2  0.0000      0.909 0.000 1.000
#> sample_22     1  0.9881      0.262 0.564 0.436
#> sample_23     1  0.7745      0.735 0.772 0.228
#> sample_24     2  0.0000      0.909 0.000 1.000
#> sample_25     2  0.8499      0.616 0.276 0.724
#> sample_26     2  0.0000      0.909 0.000 1.000
#> sample_27     1  0.6148      0.807 0.848 0.152
#> sample_34     1  0.0000      0.872 1.000 0.000
#> sample_35     1  0.0376      0.871 0.996 0.004
#> sample_36     1  0.0000      0.872 1.000 0.000
#> sample_37     1  0.0000      0.872 1.000 0.000
#> sample_38     1  0.0376      0.871 0.996 0.004
#> sample_28     1  0.0000      0.872 1.000 0.000
#> sample_29     1  0.4431      0.823 0.908 0.092
#> sample_30     1  0.0000      0.872 1.000 0.000
#> sample_31     1  0.0000      0.872 1.000 0.000
#> sample_32     1  0.0000      0.872 1.000 0.000
#> sample_33     1  0.0000      0.872 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-pam-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-pam-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for particular classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-pam-signature_compare

get_signatures() returns a data frame invisibly, so to obtain the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot, which shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n ALL.AML(p) k
#> CV:pam 65   1.04e-07 2
#> CV:pam 61   4.37e-13 3
#> CV:pam 64   4.11e-12 4
#> CV:pam 50   3.20e-09 5
#> CV:pam 38   2.03e-07 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "mclust"]
# you can also extract it by
# res = res_list["CV:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'CV' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) into a single page, providing an easy and fast comparison between the different k.

collect_plots(res)

plot of chunk CV-mclust-collect-plots

All of the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for the current k is to the partition for k-1; if the two are too similar, k is not accepted as a better choice than k-1.

select_partition_number(res)

plot of chunk CV-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.489           0.908       0.895         0.4313 0.532   0.532
#> 3 3 0.641           0.708       0.817         0.4234 0.796   0.625
#> 4 4 0.550           0.668       0.806         0.1326 0.768   0.457
#> 5 5 0.774           0.787       0.849         0.1273 0.905   0.670
#> 6 6 0.751           0.617       0.762         0.0509 0.922   0.653

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (the columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.0938     0.9237 0.012 0.988
#> sample_40     2  0.0938     0.9237 0.012 0.988
#> sample_42     2  0.1633     0.9172 0.024 0.976
#> sample_47     2  0.1414     0.9207 0.020 0.980
#> sample_48     2  0.6438     0.8294 0.164 0.836
#> sample_49     2  0.1184     0.9225 0.016 0.984
#> sample_41     2  0.6438     0.8294 0.164 0.836
#> sample_43     2  0.0376     0.9242 0.004 0.996
#> sample_44     2  0.0938     0.9233 0.012 0.988
#> sample_45     2  0.1843     0.9169 0.028 0.972
#> sample_46     2  0.1633     0.9189 0.024 0.976
#> sample_70     2  0.0376     0.9242 0.004 0.996
#> sample_71     2  0.2043     0.9117 0.032 0.968
#> sample_72     2  0.1633     0.9172 0.024 0.976
#> sample_68     2  0.6438     0.8294 0.164 0.836
#> sample_69     2  0.6438     0.8294 0.164 0.836
#> sample_67     1  0.8207     0.8977 0.744 0.256
#> sample_55     2  0.1633     0.9178 0.024 0.976
#> sample_56     2  0.1184     0.9225 0.016 0.984
#> sample_59     2  0.0672     0.9238 0.008 0.992
#> sample_52     1  0.6712     0.9874 0.824 0.176
#> sample_53     1  0.6623     0.9851 0.828 0.172
#> sample_51     1  0.6623     0.9851 0.828 0.172
#> sample_50     1  0.6623     0.9851 0.828 0.172
#> sample_54     1  0.6801     0.9858 0.820 0.180
#> sample_57     1  0.6712     0.9874 0.824 0.176
#> sample_58     1  0.6712     0.9874 0.824 0.176
#> sample_60     1  0.7139     0.9735 0.804 0.196
#> sample_61     1  0.6801     0.9858 0.820 0.180
#> sample_65     1  0.6712     0.9874 0.824 0.176
#> sample_66     2  0.8955     0.3909 0.312 0.688
#> sample_63     1  0.6712     0.9874 0.824 0.176
#> sample_64     1  0.7453     0.9577 0.788 0.212
#> sample_62     1  0.6712     0.9874 0.824 0.176
#> sample_1      2  0.0672     0.9238 0.008 0.992
#> sample_2      2  0.9850    -0.0705 0.428 0.572
#> sample_3      2  0.1633     0.9213 0.024 0.976
#> sample_4      2  0.0672     0.9238 0.008 0.992
#> sample_5      2  0.6438     0.8294 0.164 0.836
#> sample_6      2  0.1633     0.9213 0.024 0.976
#> sample_7      2  0.0672     0.9238 0.008 0.992
#> sample_8      2  0.0938     0.9237 0.012 0.988
#> sample_9      2  0.0938     0.9225 0.012 0.988
#> sample_10     2  0.1843     0.9130 0.028 0.972
#> sample_11     2  0.2043     0.9117 0.032 0.968
#> sample_12     1  0.7376     0.9621 0.792 0.208
#> sample_13     2  0.6438     0.8294 0.164 0.836
#> sample_14     2  0.1184     0.9239 0.016 0.984
#> sample_15     2  0.6438     0.8294 0.164 0.836
#> sample_16     2  0.0376     0.9242 0.004 0.996
#> sample_17     2  0.1414     0.9202 0.020 0.980
#> sample_18     2  0.0672     0.9238 0.008 0.992
#> sample_19     2  0.0938     0.9235 0.012 0.988
#> sample_20     2  0.6438     0.8294 0.164 0.836
#> sample_21     2  0.5519     0.8538 0.128 0.872
#> sample_22     2  0.1633     0.9159 0.024 0.976
#> sample_23     2  0.1633     0.9213 0.024 0.976
#> sample_24     2  0.6438     0.8294 0.164 0.836
#> sample_25     2  0.1843     0.9147 0.028 0.972
#> sample_26     2  0.0672     0.9238 0.008 0.992
#> sample_27     2  0.1184     0.9225 0.016 0.984
#> sample_34     1  0.6712     0.9874 0.824 0.176
#> sample_35     1  0.7453     0.9577 0.788 0.212
#> sample_36     1  0.6623     0.9851 0.828 0.172
#> sample_37     1  0.6623     0.9851 0.828 0.172
#> sample_38     1  0.6801     0.9857 0.820 0.180
#> sample_28     1  0.6712     0.9874 0.824 0.176
#> sample_29     1  0.6973     0.9801 0.812 0.188
#> sample_30     1  0.6623     0.9851 0.828 0.172
#> sample_31     1  0.6712     0.9874 0.824 0.176
#> sample_32     1  0.6712     0.9874 0.824 0.176
#> sample_33     1  0.6712     0.9874 0.824 0.176

Heatmap of the consensus matrix, which visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-mclust-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-mclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these can serve as candidate markers for particular classes. The following are the signature heatmaps.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-mclust-signature_compare

get_signatures() returns a data frame invisibly, so to obtain the list of signatures, the function call must be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot, which shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>            n ALL.AML(p) k
#> CV:mclust 70   2.94e-14 2
#> CV:mclust 62   2.60e-13 3
#> CV:mclust 62   1.78e-12 4
#> CV:mclust 66   9.88e-13 5
#> CV:mclust 53   1.78e-09 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


CV:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["CV", "NMF"]
# you can also extract it by
# res = res_list["CV:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'CV' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for every k (number of partitions) into a single page, providing an easy and fast comparison between the different k.

collect_plots(res)

plot of chunk CV-NMF-collect-plots

All of the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics used for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the partition for the current k is to the partition for k-1; if the two are too similar, k is not accepted as a better choice than k-1.

select_partition_number(res)

plot of chunk CV-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.633           0.787       0.916         0.4909 0.507   0.507
#> 3 3 0.555           0.703       0.837         0.3442 0.737   0.521
#> 4 4 0.497           0.556       0.731         0.1269 0.869   0.633
#> 5 5 0.595           0.529       0.738         0.0700 0.874   0.573
#> 6 6 0.599           0.459       0.665         0.0441 0.895   0.558

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 2

The following table shows the partitions. The membership matrix (the columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group, and the final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.9522    0.47506 0.372 0.628
#> sample_40     2  0.9044    0.57502 0.320 0.680
#> sample_42     2  0.0376    0.88613 0.004 0.996
#> sample_47     2  0.0000    0.88809 0.000 1.000
#> sample_48     2  0.0000    0.88809 0.000 1.000
#> sample_49     2  0.9977    0.21342 0.472 0.528
#> sample_41     2  0.0000    0.88809 0.000 1.000
#> sample_43     2  0.0000    0.88809 0.000 1.000
#> sample_44     2  0.0000    0.88809 0.000 1.000
#> sample_45     2  0.0000    0.88809 0.000 1.000
#> sample_46     2  0.0000    0.88809 0.000 1.000
#> sample_70     2  0.0376    0.88616 0.004 0.996
#> sample_71     2  0.0000    0.88809 0.000 1.000
#> sample_72     2  0.0000    0.88809 0.000 1.000
#> sample_68     2  0.0000    0.88809 0.000 1.000
#> sample_69     2  0.0000    0.88809 0.000 1.000
#> sample_67     1  0.5629    0.78714 0.868 0.132
#> sample_55     2  0.8207    0.67159 0.256 0.744
#> sample_56     2  0.8327    0.65857 0.264 0.736
#> sample_59     2  0.0000    0.88809 0.000 1.000
#> sample_52     1  0.0000    0.91740 1.000 0.000
#> sample_53     1  0.0000    0.91740 1.000 0.000
#> sample_51     1  0.0000    0.91740 1.000 0.000
#> sample_50     1  0.0000    0.91740 1.000 0.000
#> sample_54     1  0.4690    0.82332 0.900 0.100
#> sample_57     1  0.0000    0.91740 1.000 0.000
#> sample_58     1  0.0000    0.91740 1.000 0.000
#> sample_60     1  0.0376    0.91468 0.996 0.004
#> sample_61     1  0.0000    0.91740 1.000 0.000
#> sample_65     1  0.0000    0.91740 1.000 0.000
#> sample_66     2  0.9000    0.47555 0.316 0.684
#> sample_63     1  0.0000    0.91740 1.000 0.000
#> sample_64     1  0.0000    0.91740 1.000 0.000
#> sample_62     1  0.0000    0.91740 1.000 0.000
#> sample_1      2  0.0000    0.88809 0.000 1.000
#> sample_2      1  0.9866    0.21416 0.568 0.432
#> sample_3      2  0.8386    0.65292 0.268 0.732
#> sample_4      2  0.0000    0.88809 0.000 1.000
#> sample_5      2  0.0000    0.88809 0.000 1.000
#> sample_6      2  0.9323    0.52371 0.348 0.652
#> sample_7      2  0.6343    0.76990 0.160 0.840
#> sample_8      1  0.9970   -0.03983 0.532 0.468
#> sample_9      2  0.0000    0.88809 0.000 1.000
#> sample_10     1  0.9954    0.00334 0.540 0.460
#> sample_11     2  0.0000    0.88809 0.000 1.000
#> sample_12     1  0.0376    0.91457 0.996 0.004
#> sample_13     2  0.0000    0.88809 0.000 1.000
#> sample_14     2  0.0000    0.88809 0.000 1.000
#> sample_15     2  0.0000    0.88809 0.000 1.000
#> sample_16     2  0.0000    0.88809 0.000 1.000
#> sample_17     2  0.0376    0.88563 0.004 0.996
#> sample_18     2  0.3114    0.85344 0.056 0.944
#> sample_19     2  0.0000    0.88809 0.000 1.000
#> sample_20     2  0.0000    0.88809 0.000 1.000
#> sample_21     2  0.0000    0.88809 0.000 1.000
#> sample_22     2  0.9983    0.19967 0.476 0.524
#> sample_23     2  0.9909    0.29755 0.444 0.556
#> sample_24     2  0.0000    0.88809 0.000 1.000
#> sample_25     2  0.8499    0.64237 0.276 0.724
#> sample_26     2  0.0000    0.88809 0.000 1.000
#> sample_27     1  0.9833    0.13031 0.576 0.424
#> sample_34     1  0.0000    0.91740 1.000 0.000
#> sample_35     1  0.0000    0.91740 1.000 0.000
#> sample_36     1  0.0000    0.91740 1.000 0.000
#> sample_37     1  0.0000    0.91740 1.000 0.000
#> sample_38     1  0.0000    0.91740 1.000 0.000
#> sample_28     1  0.0000    0.91740 1.000 0.000
#> sample_29     1  0.0376    0.91468 0.996 0.004
#> sample_30     1  0.0000    0.91740 1.000 0.000
#> sample_31     1  0.0000    0.91740 1.000 0.000
#> sample_32     1  0.0000    0.91740 1.000 0.000
#> sample_33     1  0.0000    0.91740 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-CV-NMF-consensus-heatmap-1

Heatmap of the membership of the samples in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-CV-NMF-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between the classes; these can serve as candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-CV-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-CV-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk CV-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
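For instance, which_row can be used to pull the signature rows out of the input matrix and order them by significance. This is a demonstration only; it assumes tb was obtained as shown above and mat is the matrix returned by get_matrix().

# code only for demonstration
sig_mat = mat[tb$which_row, , drop = FALSE]  # expression of signature rows only
sig_mat = sig_mat[order(tb$fdr), ]           # most significant signatures first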

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-CV-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk CV-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>         n ALL.AML(p) k
#> CV:NMF 63   7.82e-13 2
#> CV:NMF 61   1.74e-11 3
#> CV:NMF 48   3.78e-11 4
#> CV:NMF 47   4.06e-09 5
#> CV:NMF 37   1.80e-07 6
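For the discrete case, the test can be reproduced in spirit with base R's chisq.test() on the contingency table of predicted classes versus the known labels. This is a sketch only: cola also filters out samples with low silhouette scores (which is why n changes with k), and the annotation column name ALL.AML is inferred from the output above.

# code only for demonstration
cls = get_classes(res, k = 2)$class
chisq.test(table(cls, get_anno(res)$ALL.AML))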

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "hclust"]
# you can also extract it by
# res = res_list["MAD:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'MAD' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-hclust-collect-plots

All of the plots in the panels can be made by individual functions as well; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.227           0.737       0.838         0.4437 0.549   0.549
#> 3 3 0.243           0.613       0.697         0.4206 0.759   0.569
#> 4 4 0.409           0.566       0.706         0.1373 0.949   0.845
#> 5 5 0.573           0.598       0.745         0.0754 0.901   0.674
#> 6 6 0.663           0.590       0.726         0.0402 1.000   1.000
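As a sketch of where the 1-PAC column comes from: PAC (proportion of ambiguous clustering) is the fraction of consensus values falling in an intermediate interval, commonly something like (0.1, 0.9); the exact interval cola uses may differ. It can be approximated from the consensus matrix directly.

# code only for demonstration
cm = get_consensus(res, k = 2)   # consensus matrix for k = 2
pac = mean(cm > 0.1 & cm < 0.9)  # fraction of ambiguous sample pairs
1 - pac                          # high when consensus values are near 0 or 1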

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 2
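The related helpers is_best_k() and is_stable_k() (both listed in the method summary above) return logical values, which is convenient when scripting over many results:

# code only for demonstration
is_best_k(res, k = 2)    # TRUE if k = 2 is the suggested best k
is_stable_k(res, k = 2)  # TRUE if the k = 2 partition is considered stable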

The following shows the table of the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function using the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.9754      0.591 0.408 0.592
#> sample_40     1  0.9933     -0.217 0.548 0.452
#> sample_42     2  0.3733      0.765 0.072 0.928
#> sample_47     2  0.3733      0.770 0.072 0.928
#> sample_48     2  0.0000      0.746 0.000 1.000
#> sample_49     2  0.9754      0.589 0.408 0.592
#> sample_41     2  0.0000      0.746 0.000 1.000
#> sample_43     2  0.3114      0.765 0.056 0.944
#> sample_44     2  0.5842      0.775 0.140 0.860
#> sample_45     2  0.6438      0.773 0.164 0.836
#> sample_46     2  0.6801      0.770 0.180 0.820
#> sample_70     2  0.7453      0.762 0.212 0.788
#> sample_71     2  0.6531      0.750 0.168 0.832
#> sample_72     2  0.6531      0.750 0.168 0.832
#> sample_68     2  0.0000      0.746 0.000 1.000
#> sample_69     2  0.0000      0.746 0.000 1.000
#> sample_67     2  0.9491      0.409 0.368 0.632
#> sample_55     2  0.9427      0.659 0.360 0.640
#> sample_56     2  0.9686      0.608 0.396 0.604
#> sample_59     2  0.8608      0.729 0.284 0.716
#> sample_52     1  0.4939      0.859 0.892 0.108
#> sample_53     1  0.0938      0.892 0.988 0.012
#> sample_51     1  0.0000      0.889 1.000 0.000
#> sample_50     1  0.0000      0.889 1.000 0.000
#> sample_54     1  0.6148      0.818 0.848 0.152
#> sample_57     1  0.5842      0.826 0.860 0.140
#> sample_58     1  0.0672      0.892 0.992 0.008
#> sample_60     1  0.6148      0.818 0.848 0.152
#> sample_61     1  0.0672      0.892 0.992 0.008
#> sample_65     1  0.0672      0.892 0.992 0.008
#> sample_66     2  0.7219      0.642 0.200 0.800
#> sample_63     1  0.4939      0.859 0.892 0.108
#> sample_64     1  0.6801      0.751 0.820 0.180
#> sample_62     1  0.4939      0.859 0.892 0.108
#> sample_1      2  0.9460      0.647 0.364 0.636
#> sample_2      2  0.9427      0.441 0.360 0.640
#> sample_3      2  0.8327      0.734 0.264 0.736
#> sample_4      2  0.9522      0.639 0.372 0.628
#> sample_5      2  0.0000      0.746 0.000 1.000
#> sample_6      2  0.8327      0.734 0.264 0.736
#> sample_7      2  0.9522      0.639 0.372 0.628
#> sample_8      2  0.9710      0.602 0.400 0.600
#> sample_9      2  0.8327      0.734 0.264 0.736
#> sample_10     2  0.8327      0.734 0.264 0.736
#> sample_11     2  0.8327      0.734 0.264 0.736
#> sample_12     2  0.9866      0.551 0.432 0.568
#> sample_13     2  0.0000      0.746 0.000 1.000
#> sample_14     2  0.3879      0.752 0.076 0.924
#> sample_15     2  0.0000      0.746 0.000 1.000
#> sample_16     2  0.4939      0.776 0.108 0.892
#> sample_17     2  0.3879      0.742 0.076 0.924
#> sample_18     2  0.9248      0.681 0.340 0.660
#> sample_19     2  0.4939      0.776 0.108 0.892
#> sample_20     2  0.0000      0.746 0.000 1.000
#> sample_21     2  0.0000      0.746 0.000 1.000
#> sample_22     2  0.9710      0.604 0.400 0.600
#> sample_23     2  0.8327      0.734 0.264 0.736
#> sample_24     2  0.0000      0.746 0.000 1.000
#> sample_25     2  0.9795      0.582 0.416 0.584
#> sample_26     2  0.5408      0.776 0.124 0.876
#> sample_27     2  0.9754      0.589 0.408 0.592
#> sample_34     1  0.4690      0.865 0.900 0.100
#> sample_35     1  0.4939      0.858 0.892 0.108
#> sample_36     1  0.0000      0.889 1.000 0.000
#> sample_37     1  0.0000      0.889 1.000 0.000
#> sample_38     1  0.3274      0.884 0.940 0.060
#> sample_28     1  0.1843      0.891 0.972 0.028
#> sample_29     2  0.8555      0.513 0.280 0.720
#> sample_30     1  0.0000      0.889 1.000 0.000
#> sample_31     1  0.2043      0.893 0.968 0.032
#> sample_32     1  0.2948      0.886 0.948 0.052
#> sample_33     1  0.0000      0.889 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-hclust-consensus-heatmap-1

Heatmap of the membership of the samples in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-hclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between the classes; these can serve as candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-hclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> MAD:hclust 69   5.21e-14 2
#> MAD:hclust 53   1.00e-10 3
#> MAD:hclust 48   7.21e-09 4
#> MAD:hclust 51   1.57e-09 5
#> MAD:hclust 52   4.99e-09 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "kmeans"]
# you can also extract it by
# res = res_list["MAD:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'MAD' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-kmeans-collect-plots

All of the plots in the panels can be made by individual functions as well; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.623           0.805       0.904         0.4828 0.540   0.540
#> 3 3 0.721           0.848       0.896         0.3612 0.786   0.605
#> 4 4 0.646           0.655       0.796         0.1188 0.938   0.817
#> 5 5 0.683           0.605       0.761         0.0716 0.879   0.603
#> 6 6 0.745           0.692       0.795         0.0480 0.917   0.629

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 2
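A common pattern is to feed the suggested k back into the other accessor functions, e.g.:

# code only for demonstration
best_k = suggest_best_k(res)
head(get_classes(res, k = best_k))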

The following shows the table of the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function using the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.9866      0.452 0.432 0.568
#> sample_40     2  0.9922      0.421 0.448 0.552
#> sample_42     2  0.0000      0.840 0.000 1.000
#> sample_47     2  0.0000      0.840 0.000 1.000
#> sample_48     2  0.0000      0.840 0.000 1.000
#> sample_49     2  0.9933      0.412 0.452 0.548
#> sample_41     2  0.0000      0.840 0.000 1.000
#> sample_43     2  0.0000      0.840 0.000 1.000
#> sample_44     2  0.0000      0.840 0.000 1.000
#> sample_45     2  0.0000      0.840 0.000 1.000
#> sample_46     2  0.0000      0.840 0.000 1.000
#> sample_70     2  0.1184      0.834 0.016 0.984
#> sample_71     2  0.0000      0.840 0.000 1.000
#> sample_72     2  0.0000      0.840 0.000 1.000
#> sample_68     2  0.0000      0.840 0.000 1.000
#> sample_69     2  0.0000      0.840 0.000 1.000
#> sample_67     2  0.9896      0.176 0.440 0.560
#> sample_55     2  0.9686      0.515 0.396 0.604
#> sample_56     2  0.9732      0.502 0.404 0.596
#> sample_59     2  0.0376      0.839 0.004 0.996
#> sample_52     1  0.0376      0.988 0.996 0.004
#> sample_53     1  0.0376      0.988 0.996 0.004
#> sample_51     1  0.0376      0.988 0.996 0.004
#> sample_50     1  0.0376      0.988 0.996 0.004
#> sample_54     1  0.6048      0.806 0.852 0.148
#> sample_57     1  0.0376      0.988 0.996 0.004
#> sample_58     1  0.0376      0.988 0.996 0.004
#> sample_60     1  0.0938      0.980 0.988 0.012
#> sample_61     1  0.0376      0.988 0.996 0.004
#> sample_65     1  0.0376      0.988 0.996 0.004
#> sample_66     2  0.0000      0.840 0.000 1.000
#> sample_63     1  0.0376      0.988 0.996 0.004
#> sample_64     1  0.0376      0.988 0.996 0.004
#> sample_62     1  0.0376      0.988 0.996 0.004
#> sample_1      2  0.0672      0.838 0.008 0.992
#> sample_2      2  0.6887      0.676 0.184 0.816
#> sample_3      2  0.9552      0.549 0.376 0.624
#> sample_4      2  0.0672      0.838 0.008 0.992
#> sample_5      2  0.0000      0.840 0.000 1.000
#> sample_6      2  0.9580      0.544 0.380 0.620
#> sample_7      2  0.9686      0.515 0.396 0.604
#> sample_8      2  0.9933      0.412 0.452 0.548
#> sample_9      2  0.0376      0.838 0.004 0.996
#> sample_10     2  0.9580      0.544 0.380 0.620
#> sample_11     2  0.0376      0.838 0.004 0.996
#> sample_12     1  0.0376      0.988 0.996 0.004
#> sample_13     2  0.0000      0.840 0.000 1.000
#> sample_14     2  0.0376      0.838 0.004 0.996
#> sample_15     2  0.0000      0.840 0.000 1.000
#> sample_16     2  0.0000      0.840 0.000 1.000
#> sample_17     2  0.0000      0.840 0.000 1.000
#> sample_18     2  0.6973      0.727 0.188 0.812
#> sample_19     2  0.0000      0.840 0.000 1.000
#> sample_20     2  0.0000      0.840 0.000 1.000
#> sample_21     2  0.0000      0.840 0.000 1.000
#> sample_22     2  0.9944      0.403 0.456 0.544
#> sample_23     2  0.9580      0.544 0.380 0.620
#> sample_24     2  0.0000      0.840 0.000 1.000
#> sample_25     2  0.9248      0.588 0.340 0.660
#> sample_26     2  0.0000      0.840 0.000 1.000
#> sample_27     2  0.9933      0.412 0.452 0.548
#> sample_34     1  0.0376      0.988 0.996 0.004
#> sample_35     1  0.0376      0.988 0.996 0.004
#> sample_36     1  0.0376      0.988 0.996 0.004
#> sample_37     1  0.0376      0.988 0.996 0.004
#> sample_38     1  0.0376      0.988 0.996 0.004
#> sample_28     1  0.0376      0.988 0.996 0.004
#> sample_29     1  0.4298      0.887 0.912 0.088
#> sample_30     1  0.0376      0.988 0.996 0.004
#> sample_31     1  0.0376      0.988 0.996 0.004
#> sample_32     1  0.0376      0.988 0.996 0.004
#> sample_33     1  0.0376      0.988 0.996 0.004

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-kmeans-consensus-heatmap-1

Heatmap of the membership of the samples in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-kmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between the classes; these can serve as candidate markers for the classes. The following are heatmaps of the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-kmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
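The fdr column makes it straightforward to keep only the strongest signatures. This is a demonstration only; it assumes tb was obtained from the get_signatures() call above.

# code only for demonstration
sig = tb[tb$fdr < 0.01, ]  # signatures passing a stricter FDR cutoff
nrow(sig)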

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> MAD:kmeans 65   3.43e-13 2
#> MAD:kmeans 68   1.06e-13 3
#> MAD:kmeans 60   4.20e-12 4
#> MAD:kmeans 58   5.31e-11 5
#> MAD:kmeans 60   9.09e-11 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment on the signature genes. See this vignette for more detailed explanations.


MAD:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "skmeans"]
# you can also extract it by
# res = res_list["MAD:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'MAD' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-skmeans-collect-plots

All of the plots in the panels can be made by individual functions as well; they are plotted later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1. If the two are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.640           0.728       0.884         0.4967 0.493   0.493
#> 3 3 0.866           0.872       0.946         0.3456 0.734   0.511
#> 4 4 0.735           0.726       0.847         0.1017 0.886   0.681
#> 5 5 0.679           0.465       0.693         0.0684 0.887   0.614
#> 6 6 0.708           0.641       0.797         0.0465 0.888   0.536
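get_stats() can also be queried for a single k rather than the whole table, e.g. the suggested best k = 3 for this method (demonstration only):

# code only for demonstration
get_stats(res, k = 3)  # statistics for a single k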

suggest_best_k() suggests the best k based on these statistics:

suggest_best_k(res)
#> [1] 3

The following shows the table of the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function using the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  0.0376      0.520 0.996 0.004
#> sample_40     1  0.0000      0.526 1.000 0.000
#> sample_42     2  0.9710      0.906 0.400 0.600
#> sample_47     2  0.9710      0.906 0.400 0.600
#> sample_48     2  0.9710      0.906 0.400 0.600
#> sample_49     1  0.0000      0.526 1.000 0.000
#> sample_41     2  0.9710      0.906 0.400 0.600
#> sample_43     2  0.9710      0.906 0.400 0.600
#> sample_44     2  0.9710      0.906 0.400 0.600
#> sample_45     2  0.9710      0.906 0.400 0.600
#> sample_46     2  0.9710      0.906 0.400 0.600
#> sample_70     2  0.9710      0.906 0.400 0.600
#> sample_71     2  0.9686      0.902 0.396 0.604
#> sample_72     2  0.9710      0.906 0.400 0.600
#> sample_68     2  0.9710      0.906 0.400 0.600
#> sample_69     2  0.9710      0.906 0.400 0.600
#> sample_67     2  0.0672      0.296 0.008 0.992
#> sample_55     1  0.0000      0.526 1.000 0.000
#> sample_56     1  0.0672      0.514 0.992 0.008
#> sample_59     2  0.9710      0.906 0.400 0.600
#> sample_52     1  0.9710      0.810 0.600 0.400
#> sample_53     1  0.9710      0.810 0.600 0.400
#> sample_51     1  0.9710      0.810 0.600 0.400
#> sample_50     1  0.9710      0.810 0.600 0.400
#> sample_54     2  0.9248     -0.527 0.340 0.660
#> sample_57     1  0.9710      0.810 0.600 0.400
#> sample_58     1  0.9710      0.810 0.600 0.400
#> sample_60     1  0.9710      0.810 0.600 0.400
#> sample_61     1  0.9710      0.810 0.600 0.400
#> sample_65     1  0.9710      0.810 0.600 0.400
#> sample_66     2  0.6712      0.621 0.176 0.824
#> sample_63     1  0.9710      0.810 0.600 0.400
#> sample_64     1  0.9710      0.810 0.600 0.400
#> sample_62     1  0.9710      0.810 0.600 0.400
#> sample_1      2  0.9710      0.906 0.400 0.600
#> sample_2      2  0.0672      0.327 0.008 0.992
#> sample_3      1  0.0938      0.507 0.988 0.012
#> sample_4      2  0.9710      0.906 0.400 0.600
#> sample_5      2  0.9710      0.906 0.400 0.600
#> sample_6      1  0.0938      0.507 0.988 0.012
#> sample_7      1  0.0672      0.514 0.992 0.008
#> sample_8      1  0.0000      0.526 1.000 0.000
#> sample_9      2  0.9710      0.906 0.400 0.600
#> sample_10     1  0.4022      0.581 0.920 0.080
#> sample_11     2  0.9710      0.906 0.400 0.600
#> sample_12     1  0.9710      0.810 0.600 0.400
#> sample_13     2  0.9710      0.906 0.400 0.600
#> sample_14     2  0.9710      0.906 0.400 0.600
#> sample_15     2  0.9710      0.906 0.400 0.600
#> sample_16     2  0.9710      0.906 0.400 0.600
#> sample_17     2  0.9710      0.906 0.400 0.600
#> sample_18     2  0.9944      0.842 0.456 0.544
#> sample_19     2  0.9710      0.906 0.400 0.600
#> sample_20     2  0.9710      0.906 0.400 0.600
#> sample_21     2  0.9710      0.906 0.400 0.600
#> sample_22     1  0.5842      0.652 0.860 0.140
#> sample_23     1  0.0938      0.507 0.988 0.012
#> sample_24     2  0.9710      0.906 0.400 0.600
#> sample_25     1  0.8499     -0.328 0.724 0.276
#> sample_26     2  0.9710      0.906 0.400 0.600
#> sample_27     1  0.0000      0.526 1.000 0.000
#> sample_34     1  0.9710      0.810 0.600 0.400
#> sample_35     1  0.9710      0.810 0.600 0.400
#> sample_36     1  0.9710      0.810 0.600 0.400
#> sample_37     1  0.9710      0.810 0.600 0.400
#> sample_38     1  0.9710      0.810 0.600 0.400
#> sample_28     1  0.9710      0.810 0.600 0.400
#> sample_29     2  0.9795     -0.641 0.416 0.584
#> sample_30     1  0.9710      0.810 0.600 0.400
#> sample_31     1  0.9710      0.810 0.600 0.400
#> sample_32     1  0.9710      0.810 0.600 0.400
#> sample_33     1  0.9710      0.810 0.600 0.400
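The class column above can be reproduced from the membership matrix alone; a minimal sketch, assuming res as defined in this section:

```r
# each sample is assigned to the group with the highest membership probability
mem = get_membership(res, k = 2)   # matrix with columns p1, p2
cls = max.col(mem)                 # index of the column with the maximum value
head(cls)
```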

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-skmeans-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-skmeans-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
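The signature table can be filtered programmatically, e.g. keeping only rows significant at a stricter FDR cutoff (0.01 here is an arbitrary choice):

```r
# assuming 'tb' was obtained via get_signatures(..., plot = FALSE) as above
sig = tb[tb$fdr < 0.01, ]
sig$which_row   # row indices of the stricter signature set
```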

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-skmeans-dimension-reduction-1
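dimension_reduction() also supports other reduction methods; a sketch using PCA instead (method values other than "UMAP" are assumptions based on the function's usual options):

```r
# "PCA" is assumed to be among the supported method values
dimension_reduction(res, k = 2, method = "PCA")
```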

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.
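The discrete case can be sketched directly with base R. This assumes the annotation data frame returned by get_anno() has a discrete column named ALL.AML; note that test_to_known_factors() may additionally exclude low-confidence samples, which is why n varies in its output:

```r
# a sketch of the underlying chi-squared contingency table test
cl = get_classes(res, k = 2)$class
chisq.test(table(cl, get_anno(res)$ALL.AML))
```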

test_to_known_factors(res)
#>              n ALL.AML(p) k
#> MAD:skmeans 67   2.39e-06 2
#> MAD:skmeans 67   1.71e-13 3
#> MAD:skmeans 60   4.20e-12 4
#> MAD:skmeans 44   1.51e-09 5
#> MAD:skmeans 56   5.18e-10 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "pam"]
# you can also extract it by
# res = res_list["MAD:pam"]
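Subsetting works for any combination tried in the run; a sketch looping over a few method pairs to compare their suggested best k (the chosen combinations are just examples from this report):

```r
# code only for demonstration; names follow the "top-value:partition" form
for (nm in c("MAD:pam", "MAD:mclust", "MAD:NMF")) {
    print(suggest_best_k(res_list[nm]))
}
```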

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'MAD' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-pam-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition for k-1. If they are too similar, we do not accept that k is better than k-1.
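These statistics can also be queried programmatically; a sketch picking the k with the highest 1-PAC (one possible heuristic, not cola's full rule set):

```r
# get_stats() returns a matrix with one row per k
st = get_stats(res)
st[which.max(st[, "1-PAC"]), "k"]
```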

select_partition_number(res)

plot of chunk MAD-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.651           0.824       0.925         0.4957 0.499   0.499
#> 3 3 0.448           0.417       0.706         0.3170 0.716   0.503
#> 4 4 0.644           0.662       0.813         0.1232 0.824   0.565
#> 5 5 0.661           0.486       0.746         0.0759 0.861   0.547
#> 6 6 0.731           0.602       0.801         0.0519 0.852   0.419

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2
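Related helpers report whether a given k is considered best or stable (both appear in the method list for the ConsensusPartition object above):

```r
# TRUE/FALSE checks complementing suggest_best_k()
is_best_k(res, k = 2)
is_stable_k(res, k = 2)
```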

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of the sample belonging to a certain group. The final class label for a sample is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  0.9427      0.464 0.640 0.360
#> sample_40     1  0.5629      0.824 0.868 0.132
#> sample_42     2  0.2603      0.881 0.044 0.956
#> sample_47     2  0.0000      0.906 0.000 1.000
#> sample_48     2  0.0000      0.906 0.000 1.000
#> sample_49     1  0.0938      0.918 0.988 0.012
#> sample_41     2  0.0000      0.906 0.000 1.000
#> sample_43     2  0.0000      0.906 0.000 1.000
#> sample_44     2  0.0000      0.906 0.000 1.000
#> sample_45     2  0.0000      0.906 0.000 1.000
#> sample_46     2  0.0000      0.906 0.000 1.000
#> sample_70     1  0.9491      0.445 0.632 0.368
#> sample_71     2  0.1414      0.896 0.020 0.980
#> sample_72     2  0.0672      0.903 0.008 0.992
#> sample_68     2  0.0000      0.906 0.000 1.000
#> sample_69     2  0.0000      0.906 0.000 1.000
#> sample_67     2  0.9993      0.126 0.484 0.516
#> sample_55     1  0.2236      0.907 0.964 0.036
#> sample_56     1  0.9833      0.296 0.576 0.424
#> sample_59     2  0.7219      0.723 0.200 0.800
#> sample_52     1  0.0000      0.922 1.000 0.000
#> sample_53     1  0.0000      0.922 1.000 0.000
#> sample_51     1  0.0000      0.922 1.000 0.000
#> sample_50     1  0.0000      0.922 1.000 0.000
#> sample_54     1  0.4562      0.852 0.904 0.096
#> sample_57     1  0.0000      0.922 1.000 0.000
#> sample_58     1  0.0000      0.922 1.000 0.000
#> sample_60     1  0.1633      0.913 0.976 0.024
#> sample_61     1  0.0000      0.922 1.000 0.000
#> sample_65     1  0.0000      0.922 1.000 0.000
#> sample_66     2  0.9248      0.483 0.340 0.660
#> sample_63     1  0.0000      0.922 1.000 0.000
#> sample_64     1  0.0000      0.922 1.000 0.000
#> sample_62     1  0.0672      0.919 0.992 0.008
#> sample_1      2  0.0376      0.904 0.004 0.996
#> sample_2      2  0.9866      0.291 0.432 0.568
#> sample_3      1  0.2236      0.906 0.964 0.036
#> sample_4      2  0.9881      0.207 0.436 0.564
#> sample_5      2  0.0000      0.906 0.000 1.000
#> sample_6      1  0.0938      0.918 0.988 0.012
#> sample_7      1  0.7674      0.713 0.776 0.224
#> sample_8      1  0.7674      0.711 0.776 0.224
#> sample_9      2  0.9427      0.435 0.360 0.640
#> sample_10     1  0.0938      0.918 0.988 0.012
#> sample_11     2  0.4690      0.835 0.100 0.900
#> sample_12     1  0.0000      0.922 1.000 0.000
#> sample_13     2  0.0000      0.906 0.000 1.000
#> sample_14     2  0.0000      0.906 0.000 1.000
#> sample_15     2  0.0000      0.906 0.000 1.000
#> sample_16     2  0.0000      0.906 0.000 1.000
#> sample_17     2  0.0000      0.906 0.000 1.000
#> sample_18     2  0.7453      0.710 0.212 0.788
#> sample_19     2  0.0000      0.906 0.000 1.000
#> sample_20     2  0.0000      0.906 0.000 1.000
#> sample_21     2  0.0000      0.906 0.000 1.000
#> sample_22     1  0.6973      0.759 0.812 0.188
#> sample_23     1  0.5408      0.832 0.876 0.124
#> sample_24     2  0.0000      0.906 0.000 1.000
#> sample_25     1  0.9850      0.286 0.572 0.428
#> sample_26     2  0.0672      0.903 0.008 0.992
#> sample_27     1  0.0938      0.918 0.988 0.012
#> sample_34     1  0.0000      0.922 1.000 0.000
#> sample_35     1  0.0000      0.922 1.000 0.000
#> sample_36     1  0.0000      0.922 1.000 0.000
#> sample_37     1  0.0000      0.922 1.000 0.000
#> sample_38     1  0.0000      0.922 1.000 0.000
#> sample_28     1  0.0000      0.922 1.000 0.000
#> sample_29     1  0.0376      0.921 0.996 0.004
#> sample_30     1  0.0000      0.922 1.000 0.000
#> sample_31     1  0.0000      0.922 1.000 0.000
#> sample_32     1  0.0000      0.922 1.000 0.000
#> sample_33     1  0.0000      0.922 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-pam-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-pam-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n ALL.AML(p) k
#> MAD:pam 63   2.89e-07 2
#> MAD:pam 24         NA 3
#> MAD:pam 58   6.37e-11 4
#> MAD:pam 37   4.82e-07 5
#> MAD:pam 49   2.89e-08 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "mclust"]
# you can also extract it by
# res = res_list["MAD:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'MAD' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-mclust-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition for k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.824           0.927       0.932         0.4416 0.532   0.532
#> 3 3 0.562           0.688       0.821         0.4007 0.786   0.606
#> 4 4 0.550           0.636       0.787         0.1485 0.813   0.535
#> 5 5 0.719           0.729       0.853         0.0984 0.885   0.615
#> 6 6 0.672           0.573       0.713         0.0392 0.886   0.543

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of the sample belonging to a certain group. The final class label for a sample is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.1633      0.959 0.024 0.976
#> sample_40     2  0.1633      0.959 0.024 0.976
#> sample_42     2  0.1633      0.959 0.024 0.976
#> sample_47     2  0.1184      0.942 0.016 0.984
#> sample_48     2  0.3733      0.907 0.072 0.928
#> sample_49     2  0.1633      0.959 0.024 0.976
#> sample_41     2  0.3733      0.907 0.072 0.928
#> sample_43     2  0.0938      0.956 0.012 0.988
#> sample_44     2  0.0672      0.954 0.008 0.992
#> sample_45     2  0.0672      0.951 0.008 0.992
#> sample_46     2  0.1184      0.957 0.016 0.984
#> sample_70     2  0.1633      0.959 0.024 0.976
#> sample_71     2  0.1633      0.959 0.024 0.976
#> sample_72     2  0.1633      0.959 0.024 0.976
#> sample_68     2  0.3733      0.907 0.072 0.928
#> sample_69     2  0.3733      0.907 0.072 0.928
#> sample_67     1  0.8661      0.738 0.712 0.288
#> sample_55     2  0.1633      0.959 0.024 0.976
#> sample_56     2  0.1633      0.959 0.024 0.976
#> sample_59     2  0.1633      0.959 0.024 0.976
#> sample_52     1  0.3733      0.950 0.928 0.072
#> sample_53     1  0.3733      0.950 0.928 0.072
#> sample_51     1  0.3733      0.950 0.928 0.072
#> sample_50     1  0.3733      0.950 0.928 0.072
#> sample_54     1  0.8081      0.806 0.752 0.248
#> sample_57     1  0.6247      0.902 0.844 0.156
#> sample_58     1  0.3733      0.950 0.928 0.072
#> sample_60     1  0.8144      0.800 0.748 0.252
#> sample_61     1  0.3733      0.950 0.928 0.072
#> sample_65     1  0.3733      0.950 0.928 0.072
#> sample_66     2  0.6531      0.784 0.168 0.832
#> sample_63     1  0.3733      0.950 0.928 0.072
#> sample_64     1  0.6623      0.889 0.828 0.172
#> sample_62     1  0.4022      0.946 0.920 0.080
#> sample_1      2  0.1633      0.959 0.024 0.976
#> sample_2      2  0.8763      0.527 0.296 0.704
#> sample_3      2  0.1633      0.959 0.024 0.976
#> sample_4      2  0.1633      0.959 0.024 0.976
#> sample_5      2  0.3733      0.907 0.072 0.928
#> sample_6      2  0.1633      0.959 0.024 0.976
#> sample_7      2  0.1633      0.959 0.024 0.976
#> sample_8      2  0.1633      0.959 0.024 0.976
#> sample_9      2  0.1633      0.959 0.024 0.976
#> sample_10     2  0.1633      0.959 0.024 0.976
#> sample_11     2  0.1633      0.959 0.024 0.976
#> sample_12     1  0.8081      0.804 0.752 0.248
#> sample_13     2  0.3733      0.907 0.072 0.928
#> sample_14     2  0.2043      0.951 0.032 0.968
#> sample_15     2  0.3733      0.907 0.072 0.928
#> sample_16     2  0.0376      0.953 0.004 0.996
#> sample_17     2  0.3431      0.923 0.064 0.936
#> sample_18     2  0.1633      0.959 0.024 0.976
#> sample_19     2  0.0376      0.953 0.004 0.996
#> sample_20     2  0.3733      0.907 0.072 0.928
#> sample_21     2  0.3733      0.907 0.072 0.928
#> sample_22     2  0.1633      0.959 0.024 0.976
#> sample_23     2  0.1633      0.959 0.024 0.976
#> sample_24     2  0.3733      0.907 0.072 0.928
#> sample_25     2  0.1633      0.959 0.024 0.976
#> sample_26     2  0.1633      0.959 0.024 0.976
#> sample_27     2  0.1633      0.959 0.024 0.976
#> sample_34     1  0.3733      0.950 0.928 0.072
#> sample_35     1  0.6887      0.878 0.816 0.184
#> sample_36     1  0.3733      0.950 0.928 0.072
#> sample_37     1  0.3733      0.950 0.928 0.072
#> sample_38     1  0.3733      0.950 0.928 0.072
#> sample_28     1  0.3733      0.950 0.928 0.072
#> sample_29     1  0.5946      0.910 0.856 0.144
#> sample_30     1  0.3733      0.950 0.928 0.072
#> sample_31     1  0.3733      0.950 0.928 0.072
#> sample_32     1  0.3733      0.950 0.928 0.072
#> sample_33     1  0.3733      0.950 0.928 0.072

Heatmap of the consensus matrix. It visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-mclust-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, showing how consistent the partitions are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-mclust-membership-heatmap-1

Once we have the classes for the columns, we can look for signatures that are significantly different between classes; these are candidate markers for the classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, assign the function call to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> MAD:mclust 72   8.75e-14 2
#> MAD:mclust 61   4.15e-13 3
#> MAD:mclust 63   1.06e-12 4
#> MAD:mclust 63   4.79e-12 5
#> MAD:mclust 47   9.40e-09 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


MAD:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["MAD", "NMF"]
# you can also extract it by
# res = res_list["MAD:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'MAD' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between different k.

collect_plots(res)

plot of chunk MAD-NMF-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition for k-1. If they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk MAD-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.680           0.811       0.925         0.4899 0.507   0.507
#> 3 3 0.484           0.709       0.818         0.3511 0.787   0.594
#> 4 4 0.493           0.522       0.720         0.1228 0.865   0.627
#> 5 5 0.598           0.532       0.740         0.0694 0.876   0.575
#> 6 6 0.611           0.438       0.670         0.0424 0.885   0.530

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following table shows the partitions (you need to click the show/hide code output link to see it). The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of the sample belonging to a certain group. The final class label for a sample is the group with the highest probability.

In the get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.

show/hide code output

cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     2  0.9460    0.47355 0.364 0.636
#> sample_40     2  0.9323    0.50751 0.348 0.652
#> sample_42     2  0.0000    0.91217 0.000 1.000
#> sample_47     2  0.0000    0.91217 0.000 1.000
#> sample_48     2  0.0000    0.91217 0.000 1.000
#> sample_49     2  0.9944    0.22386 0.456 0.544
#> sample_41     2  0.0000    0.91217 0.000 1.000
#> sample_43     2  0.0000    0.91217 0.000 1.000
#> sample_44     2  0.0000    0.91217 0.000 1.000
#> sample_45     2  0.0000    0.91217 0.000 1.000
#> sample_46     2  0.0000    0.91217 0.000 1.000
#> sample_70     2  0.0376    0.91020 0.004 0.996
#> sample_71     2  0.0938    0.90644 0.012 0.988
#> sample_72     2  0.0000    0.91217 0.000 1.000
#> sample_68     2  0.0000    0.91217 0.000 1.000
#> sample_69     2  0.0000    0.91217 0.000 1.000
#> sample_67     1  0.7139    0.71548 0.804 0.196
#> sample_55     2  0.6148    0.79467 0.152 0.848
#> sample_56     2  0.5629    0.81495 0.132 0.868
#> sample_59     2  0.0000    0.91217 0.000 1.000
#> sample_52     1  0.0000    0.91389 1.000 0.000
#> sample_53     1  0.0000    0.91389 1.000 0.000
#> sample_51     1  0.0000    0.91389 1.000 0.000
#> sample_50     1  0.0000    0.91389 1.000 0.000
#> sample_54     1  0.8016    0.64617 0.756 0.244
#> sample_57     1  0.0000    0.91389 1.000 0.000
#> sample_58     1  0.0000    0.91389 1.000 0.000
#> sample_60     1  0.0938    0.90467 0.988 0.012
#> sample_61     1  0.0000    0.91389 1.000 0.000
#> sample_65     1  0.0000    0.91389 1.000 0.000
#> sample_66     2  0.4431    0.83337 0.092 0.908
#> sample_63     1  0.0000    0.91389 1.000 0.000
#> sample_64     1  0.0000    0.91389 1.000 0.000
#> sample_62     1  0.0000    0.91389 1.000 0.000
#> sample_1      2  0.0000    0.91217 0.000 1.000
#> sample_2      1  0.9833    0.27353 0.576 0.424
#> sample_3      2  0.3584    0.86847 0.068 0.932
#> sample_4      2  0.0000    0.91217 0.000 1.000
#> sample_5      2  0.0000    0.91217 0.000 1.000
#> sample_6      2  0.9129    0.54748 0.328 0.672
#> sample_7      2  0.5294    0.82590 0.120 0.880
#> sample_8      1  0.9977   -0.00368 0.528 0.472
#> sample_9      2  0.0000    0.91217 0.000 1.000
#> sample_10     2  0.9996    0.09922 0.488 0.512
#> sample_11     2  0.0000    0.91217 0.000 1.000
#> sample_12     1  0.0000    0.91389 1.000 0.000
#> sample_13     2  0.0000    0.91217 0.000 1.000
#> sample_14     2  0.0000    0.91217 0.000 1.000
#> sample_15     2  0.0000    0.91217 0.000 1.000
#> sample_16     2  0.0000    0.91217 0.000 1.000
#> sample_17     2  0.0000    0.91217 0.000 1.000
#> sample_18     2  0.1414    0.90124 0.020 0.980
#> sample_19     2  0.0000    0.91217 0.000 1.000
#> sample_20     2  0.0000    0.91217 0.000 1.000
#> sample_21     2  0.0000    0.91217 0.000 1.000
#> sample_22     1  0.9491    0.34331 0.632 0.368
#> sample_23     2  0.8909    0.58335 0.308 0.692
#> sample_24     2  0.0000    0.91217 0.000 1.000
#> sample_25     2  0.9044    0.56124 0.320 0.680
#> sample_26     2  0.0000    0.91217 0.000 1.000
#> sample_27     1  0.9970    0.01229 0.532 0.468
#> sample_34     1  0.0000    0.91389 1.000 0.000
#> sample_35     1  0.0000    0.91389 1.000 0.000
#> sample_36     1  0.0000    0.91389 1.000 0.000
#> sample_37     1  0.0000    0.91389 1.000 0.000
#> sample_38     1  0.0000    0.91389 1.000 0.000
#> sample_28     1  0.0000    0.91389 1.000 0.000
#> sample_29     1  0.0000    0.91389 1.000 0.000
#> sample_30     1  0.0000    0.91389 1.000 0.000
#> sample_31     1  0.0000    0.91389 1.000 0.000
#> sample_32     1  0.0000    0.91389 1.000 0.000
#> sample_33     1  0.0000    0.91389 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-MAD-NMF-consensus-heatmap-1

Heatmap of the sample memberships across all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-MAD-NMF-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures: rows that differ significantly between classes and are therefore candidate markers for them. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-MAD-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-MAD-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk MAD-NMF-signature_compare

get_signatures() returns a data frame invisibly; to obtain the list of signatures, assign the function call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and performs only the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
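
As a usage sketch (with toy values standing in for a real table from get_signatures()), signatures can be filtered by FDR and grouped by the km column:

```r
# Toy stand-in for the signature table returned by get_signatures()
tb = data.frame(which_row = c(38, 40, 98),
                fdr = c(0.0428, 0.0187, 0.0094),
                km  = c(1, 1, 2))
sig = tb[tb$fdr < 0.05, ]        # keep significant signatures
split(sig$which_row, sig$km)     # signature row indices per km group
```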

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-MAD-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk MAD-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n ALL.AML(p) k
#> MAD:NMF 65   2.13e-12 2
#> MAD:NMF 67   8.26e-12 3
#> MAD:NMF 48   3.78e-11 4
#> MAD:NMF 47   2.27e-08 5
#> MAD:NMF 38   2.02e-07 6
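
For a discrete annotation such as ALL.AML, the underlying test can be reproduced directly with chisq.test(). A self-contained sketch with simulated labels (the real call would use get_classes() and get_anno() on the result object):

```r
set.seed(1)
cl   = rep(c(1, 2), times = c(25, 47))             # simulated subgroup labels
anno = ifelse(cl == 1, "AML", "ALL")               # strongly associated annotation
anno[sample(72, 5)] = sample(c("ALL", "AML"), 5, replace = TRUE)  # add some noise
chisq.test(table(cl, anno))$p.value                # near-zero p-value
```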

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:hclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "hclust"]
# you can also extract it by
# res = res_list["ATC:hclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'ATC' method.
#>   Subgroups are detected by 'hclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all values of k (the number of partitions) into a single page, providing a quick comparison across different k.

collect_plots(res)

plot of chunk ATC-hclust-collect-plots

All the plots in the panels can be made by individual functions, and they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index (the columns of get_stats()).

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1 groups; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-hclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.574           0.813       0.911         0.4864 0.495   0.495
#> 3 3 0.479           0.645       0.709         0.2767 0.842   0.698
#> 4 4 0.495           0.470       0.695         0.1376 0.869   0.676
#> 5 5 0.601           0.663       0.797         0.0839 0.832   0.492
#> 6 6 0.622           0.687       0.782         0.0352 0.975   0.882

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method; each value represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the output of get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  0.0000      0.856 1.000 0.000
#> sample_40     1  0.0376      0.855 0.996 0.004
#> sample_42     2  0.0672      0.939 0.008 0.992
#> sample_47     2  0.0000      0.940 0.000 1.000
#> sample_48     2  0.0000      0.940 0.000 1.000
#> sample_49     1  0.0000      0.856 1.000 0.000
#> sample_41     2  0.0000      0.940 0.000 1.000
#> sample_43     2  0.0672      0.939 0.008 0.992
#> sample_44     2  0.0000      0.940 0.000 1.000
#> sample_45     2  0.0000      0.940 0.000 1.000
#> sample_46     2  0.0672      0.939 0.008 0.992
#> sample_70     2  0.1184      0.935 0.016 0.984
#> sample_71     2  0.0938      0.937 0.012 0.988
#> sample_72     2  0.0938      0.937 0.012 0.988
#> sample_68     2  0.0000      0.940 0.000 1.000
#> sample_69     2  0.0000      0.940 0.000 1.000
#> sample_67     2  0.0938      0.937 0.012 0.988
#> sample_55     1  0.9963      0.307 0.536 0.464
#> sample_56     1  0.0000      0.856 1.000 0.000
#> sample_59     2  0.7139      0.709 0.196 0.804
#> sample_52     1  0.9491      0.540 0.632 0.368
#> sample_53     1  0.0000      0.856 1.000 0.000
#> sample_51     1  0.0000      0.856 1.000 0.000
#> sample_50     1  0.0000      0.856 1.000 0.000
#> sample_54     2  0.9129      0.420 0.328 0.672
#> sample_57     1  1.0000      0.193 0.500 0.500
#> sample_58     1  0.0000      0.856 1.000 0.000
#> sample_60     2  0.9358      0.351 0.352 0.648
#> sample_61     1  0.0000      0.856 1.000 0.000
#> sample_65     1  0.2043      0.851 0.968 0.032
#> sample_66     2  0.0000      0.940 0.000 1.000
#> sample_63     1  0.9491      0.540 0.632 0.368
#> sample_64     1  0.0000      0.856 1.000 0.000
#> sample_62     1  0.9491      0.540 0.632 0.368
#> sample_1      1  0.7815      0.748 0.768 0.232
#> sample_2      2  0.0376      0.940 0.004 0.996
#> sample_3      1  0.7950      0.739 0.760 0.240
#> sample_4      1  0.7376      0.766 0.792 0.208
#> sample_5      2  0.0000      0.940 0.000 1.000
#> sample_6      1  0.7950      0.739 0.760 0.240
#> sample_7      1  0.7376      0.766 0.792 0.208
#> sample_8      1  0.0000      0.856 1.000 0.000
#> sample_9      2  0.0376      0.940 0.004 0.996
#> sample_10     1  0.9833      0.419 0.576 0.424
#> sample_11     2  0.0376      0.940 0.004 0.996
#> sample_12     1  0.7299      0.771 0.796 0.204
#> sample_13     2  0.0000      0.940 0.000 1.000
#> sample_14     2  0.0000      0.940 0.000 1.000
#> sample_15     2  0.0000      0.940 0.000 1.000
#> sample_16     2  0.2778      0.907 0.048 0.952
#> sample_17     2  0.1184      0.934 0.016 0.984
#> sample_18     2  0.9491      0.303 0.368 0.632
#> sample_19     2  0.2778      0.907 0.048 0.952
#> sample_20     2  0.0000      0.940 0.000 1.000
#> sample_21     2  0.0000      0.940 0.000 1.000
#> sample_22     1  0.0672      0.855 0.992 0.008
#> sample_23     1  0.7950      0.739 0.760 0.240
#> sample_24     2  0.0000      0.940 0.000 1.000
#> sample_25     1  0.7528      0.763 0.784 0.216
#> sample_26     2  0.6247      0.774 0.156 0.844
#> sample_27     1  0.0000      0.856 1.000 0.000
#> sample_34     1  0.0000      0.856 1.000 0.000
#> sample_35     1  0.0000      0.856 1.000 0.000
#> sample_36     1  0.6148      0.785 0.848 0.152
#> sample_37     1  0.0000      0.856 1.000 0.000
#> sample_38     1  0.0000      0.856 1.000 0.000
#> sample_28     1  0.2948      0.844 0.948 0.052
#> sample_29     2  0.0376      0.940 0.004 0.996
#> sample_30     1  0.8555      0.680 0.720 0.280
#> sample_31     1  0.1184      0.854 0.984 0.016
#> sample_32     1  0.0938      0.855 0.988 0.012
#> sample_33     1  0.0000      0.856 1.000 0.000
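
The class column above follows the highest-probability rule: each sample is assigned to the group with the largest membership probability. A minimal sketch on two rows copied from the table:

```r
# Membership probabilities (p1, p2) copied from the table above
p = rbind(sample_55 = c(0.536, 0.464),   # -> class 1
          sample_59 = c(0.196, 0.804))   # -> class 2
apply(p, 1, which.max)                   # class label per sample
```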

Heatmap of the consensus matrix, which visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-hclust-consensus-heatmap-1

Heatmap of the sample memberships across all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-hclust-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures: rows that differ significantly between classes and are therefore candidate markers for them. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-hclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-hclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-hclust-signature_compare

get_signatures() returns a data frame invisibly; to obtain the list of signatures, assign the function call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and performs only the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-hclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-hclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> ATC:hclust 66   4.16e-05 2
#> ATC:hclust 62   3.04e-03 3
#> ATC:hclust 30   8.73e-01 4
#> ATC:hclust 52   8.51e-06 5
#> ATC:hclust 52   1.57e-06 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:kmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "kmeans"]
# you can also extract it by
# res = res_list["ATC:kmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'ATC' method.
#>   Subgroups are detected by 'kmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all values of k (the number of partitions) into a single page, providing a quick comparison across different k.

collect_plots(res)

plot of chunk ATC-kmeans-collect-plots

All the plots in the panels can be made by individual functions, and they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index (the columns of get_stats()).

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1 groups; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-kmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 1.000           0.982       0.992         0.5025 0.499   0.499
#> 3 3 0.553           0.621       0.800         0.3021 0.777   0.575
#> 4 4 0.620           0.644       0.808         0.1179 0.818   0.526
#> 5 5 0.689           0.624       0.780         0.0746 0.857   0.530
#> 6 6 0.712           0.656       0.769         0.0471 0.926   0.674

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 2
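
One heuristic in the spirit of the suggestion above (a sketch, not cola's exact rule set): treat a partition as stable when 1-PAC passes a cutoff, and take the smallest stable k. With the 1-PAC values from the get_stats(res) table above:

```r
# 1-PAC per k, copied from get_stats(res) above
one_minus_pac = c(`2` = 1.000, `3` = 0.553, `4` = 0.620, `5` = 0.689, `6` = 0.712)
stable = as.integer(names(one_minus_pac)[one_minus_pac >= 0.9])
if (length(stable)) min(stable) else NA   # 2, agreeing with suggest_best_k()
```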

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method; each value represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the output of get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  0.0000      0.985 1.000 0.000
#> sample_40     1  0.0000      0.985 1.000 0.000
#> sample_42     2  0.0000      1.000 0.000 1.000
#> sample_47     2  0.0000      1.000 0.000 1.000
#> sample_48     2  0.0000      1.000 0.000 1.000
#> sample_49     1  0.0000      0.985 1.000 0.000
#> sample_41     2  0.0000      1.000 0.000 1.000
#> sample_43     2  0.0000      1.000 0.000 1.000
#> sample_44     2  0.0000      1.000 0.000 1.000
#> sample_45     2  0.0000      1.000 0.000 1.000
#> sample_46     2  0.0000      1.000 0.000 1.000
#> sample_70     2  0.0000      1.000 0.000 1.000
#> sample_71     2  0.0376      0.996 0.004 0.996
#> sample_72     2  0.0000      1.000 0.000 1.000
#> sample_68     2  0.0000      1.000 0.000 1.000
#> sample_69     2  0.0000      1.000 0.000 1.000
#> sample_67     2  0.0000      1.000 0.000 1.000
#> sample_55     1  0.0000      0.985 1.000 0.000
#> sample_56     1  0.0000      0.985 1.000 0.000
#> sample_59     2  0.0376      0.996 0.004 0.996
#> sample_52     1  0.0000      0.985 1.000 0.000
#> sample_53     1  0.0000      0.985 1.000 0.000
#> sample_51     1  0.0000      0.985 1.000 0.000
#> sample_50     1  0.0000      0.985 1.000 0.000
#> sample_54     2  0.0000      1.000 0.000 1.000
#> sample_57     1  0.0000      0.985 1.000 0.000
#> sample_58     1  0.0000      0.985 1.000 0.000
#> sample_60     1  0.6887      0.783 0.816 0.184
#> sample_61     1  0.0000      0.985 1.000 0.000
#> sample_65     1  0.0000      0.985 1.000 0.000
#> sample_66     2  0.0000      1.000 0.000 1.000
#> sample_63     1  0.0000      0.985 1.000 0.000
#> sample_64     1  0.0000      0.985 1.000 0.000
#> sample_62     1  0.0000      0.985 1.000 0.000
#> sample_1      1  0.0000      0.985 1.000 0.000
#> sample_2      2  0.0000      1.000 0.000 1.000
#> sample_3      1  0.0000      0.985 1.000 0.000
#> sample_4      1  0.0000      0.985 1.000 0.000
#> sample_5      2  0.0000      1.000 0.000 1.000
#> sample_6      1  0.0000      0.985 1.000 0.000
#> sample_7      1  0.0000      0.985 1.000 0.000
#> sample_8      1  0.0000      0.985 1.000 0.000
#> sample_9      2  0.0000      1.000 0.000 1.000
#> sample_10     1  0.8267      0.664 0.740 0.260
#> sample_11     2  0.0000      1.000 0.000 1.000
#> sample_12     1  0.0000      0.985 1.000 0.000
#> sample_13     2  0.0000      1.000 0.000 1.000
#> sample_14     2  0.0000      1.000 0.000 1.000
#> sample_15     2  0.0000      1.000 0.000 1.000
#> sample_16     2  0.0000      1.000 0.000 1.000
#> sample_17     2  0.0000      1.000 0.000 1.000
#> sample_18     1  0.6247      0.820 0.844 0.156
#> sample_19     2  0.0000      1.000 0.000 1.000
#> sample_20     2  0.0000      1.000 0.000 1.000
#> sample_21     2  0.0000      1.000 0.000 1.000
#> sample_22     1  0.0000      0.985 1.000 0.000
#> sample_23     1  0.0000      0.985 1.000 0.000
#> sample_24     2  0.0000      1.000 0.000 1.000
#> sample_25     1  0.0000      0.985 1.000 0.000
#> sample_26     2  0.0376      0.996 0.004 0.996
#> sample_27     1  0.0000      0.985 1.000 0.000
#> sample_34     1  0.0000      0.985 1.000 0.000
#> sample_35     1  0.0000      0.985 1.000 0.000
#> sample_36     1  0.0000      0.985 1.000 0.000
#> sample_37     1  0.0000      0.985 1.000 0.000
#> sample_38     1  0.0000      0.985 1.000 0.000
#> sample_28     1  0.0000      0.985 1.000 0.000
#> sample_29     2  0.0000      1.000 0.000 1.000
#> sample_30     1  0.0000      0.985 1.000 0.000
#> sample_31     1  0.0000      0.985 1.000 0.000
#> sample_32     1  0.0000      0.985 1.000 0.000
#> sample_33     1  0.0000      0.985 1.000 0.000

Heatmap of the consensus matrix, which visualizes the probability that two samples belong to the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-kmeans-consensus-heatmap-1

Heatmap of the sample memberships across all partitions, showing how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-kmeans-membership-heatmap-1

Once the classes for the columns are determined, we can look for signatures: rows that differ significantly between classes and are therefore candidate markers for them. The following heatmaps visualize the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-kmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-kmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-kmeans-signature_compare

get_signatures() returns a data frame invisibly; to obtain the list of signatures, assign the function call to a variable explicitly. In the following code, setting the plot argument to FALSE skips the heatmap and performs only the differential analysis.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-kmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-kmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if it is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> ATC:kmeans 72   1.50e-04 2
#> ATC:kmeans 60   2.70e-11 3
#> ATC:kmeans 56   7.62e-09 4
#> ATC:kmeans 56   9.68e-08 5
#> ATC:kmeans 62   8.55e-08 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:skmeans

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "skmeans"]
# you can also extract it by
# res = res_list["ATC:skmeans"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'ATC' method.
#>   Subgroups are detected by 'skmeans' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 3.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all values of k (the number of partitions) into a single page, providing a quick comparison across different k.

collect_plots(res)

plot of chunk ATC-skmeans-collect-plots

All the plots in the panels can be made by individual functions, and they are shown later in this section.

select_partition_number() produces several plots showing different statistics for choosing an "optimized" k: 1-PAC, mean silhouette, concordance, area increased, Rand index and Jaccard index (the columns of get_stats()).

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score or a higher concordance corresponds to a better partition. The Rand index and Jaccard index measure how similar the current partition is to the partition with k-1 groups; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-skmeans-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 1.000           0.977       0.991         0.5052 0.496   0.496
#> 3 3 0.961           0.932       0.973         0.2923 0.823   0.652
#> 4 4 0.789           0.740       0.857         0.1018 0.925   0.789
#> 5 5 0.706           0.678       0.825         0.0633 0.941   0.804
#> 6 6 0.719           0.590       0.786         0.0431 0.914   0.681

suggest_best_k() suggests the best \(k\) based on these statistics.

suggest_best_k(res)
#> [1] 3
#> attr(,"optional")
#> [1] 2

There is also an optional best \(k\) = 2 that is worth checking.
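
An optional best k is reported when more than one k passes the stability criteria. With the 1-PAC values from the get_stats() table above (a sketch of the idea, not cola's exact implementation), both k = 2 and k = 3 qualify:

```r
# 1-PAC per k, copied from get_stats(res) above
one_minus_pac = c(`2` = 1.000, `3` = 0.961, `4` = 0.789, `5` = 0.706, `6` = 0.719)
names(one_minus_pac)[one_minus_pac >= 0.9]   # both k = 2 and k = 3 pass the cutoff
```

The is_stable_k() method listed earlier can be used to check this directly on the result object, e.g. is_stable_k(res, k = 2).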

The following shows the table of the partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method; each value represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In the output of get_classes(), the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  0.0000      0.982 1.000 0.000
#> sample_40     1  0.0000      0.982 1.000 0.000
#> sample_42     2  0.0000      1.000 0.000 1.000
#> sample_47     2  0.0000      1.000 0.000 1.000
#> sample_48     2  0.0000      1.000 0.000 1.000
#> sample_49     1  0.0000      0.982 1.000 0.000
#> sample_41     2  0.0000      1.000 0.000 1.000
#> sample_43     2  0.0000      1.000 0.000 1.000
#> sample_44     2  0.0000      1.000 0.000 1.000
#> sample_45     2  0.0000      1.000 0.000 1.000
#> sample_46     2  0.0000      1.000 0.000 1.000
#> sample_70     2  0.0000      1.000 0.000 1.000
#> sample_71     2  0.0000      1.000 0.000 1.000
#> sample_72     2  0.0000      1.000 0.000 1.000
#> sample_68     2  0.0000      1.000 0.000 1.000
#> sample_69     2  0.0000      1.000 0.000 1.000
#> sample_67     2  0.0000      1.000 0.000 1.000
#> sample_55     1  0.0000      0.982 1.000 0.000
#> sample_56     1  0.0000      0.982 1.000 0.000
#> sample_59     2  0.0000      1.000 0.000 1.000
#> sample_52     1  0.0000      0.982 1.000 0.000
#> sample_53     1  0.0000      0.982 1.000 0.000
#> sample_51     1  0.0000      0.982 1.000 0.000
#> sample_50     1  0.0000      0.982 1.000 0.000
#> sample_54     2  0.0000      1.000 0.000 1.000
#> sample_57     1  0.0000      0.982 1.000 0.000
#> sample_58     1  0.0000      0.982 1.000 0.000
#> sample_60     1  0.9170      0.515 0.668 0.332
#> sample_61     1  0.0000      0.982 1.000 0.000
#> sample_65     1  0.0000      0.982 1.000 0.000
#> sample_66     2  0.0000      1.000 0.000 1.000
#> sample_63     1  0.0000      0.982 1.000 0.000
#> sample_64     1  0.0000      0.982 1.000 0.000
#> sample_62     1  0.0000      0.982 1.000 0.000
#> sample_1      1  0.0000      0.982 1.000 0.000
#> sample_2      2  0.0000      1.000 0.000 1.000
#> sample_3      1  0.0000      0.982 1.000 0.000
#> sample_4      1  0.0000      0.982 1.000 0.000
#> sample_5      2  0.0000      1.000 0.000 1.000
#> sample_6      1  0.0000      0.982 1.000 0.000
#> sample_7      1  0.0000      0.982 1.000 0.000
#> sample_8      1  0.0000      0.982 1.000 0.000
#> sample_9      2  0.0000      1.000 0.000 1.000
#> sample_10     2  0.0672      0.992 0.008 0.992
#> sample_11     2  0.0000      1.000 0.000 1.000
#> sample_12     1  0.0000      0.982 1.000 0.000
#> sample_13     2  0.0000      1.000 0.000 1.000
#> sample_14     2  0.0000      1.000 0.000 1.000
#> sample_15     2  0.0000      1.000 0.000 1.000
#> sample_16     2  0.0000      1.000 0.000 1.000
#> sample_17     2  0.0000      1.000 0.000 1.000
#> sample_18     1  0.9286      0.489 0.656 0.344
#> sample_19     2  0.0000      1.000 0.000 1.000
#> sample_20     2  0.0000      1.000 0.000 1.000
#> sample_21     2  0.0000      1.000 0.000 1.000
#> sample_22     1  0.0000      0.982 1.000 0.000
#> sample_23     1  0.0000      0.982 1.000 0.000
#> sample_24     2  0.0000      1.000 0.000 1.000
#> sample_25     1  0.0000      0.982 1.000 0.000
#> sample_26     2  0.0000      1.000 0.000 1.000
#> sample_27     1  0.0000      0.982 1.000 0.000
#> sample_34     1  0.0000      0.982 1.000 0.000
#> sample_35     1  0.0000      0.982 1.000 0.000
#> sample_36     1  0.0000      0.982 1.000 0.000
#> sample_37     1  0.0000      0.982 1.000 0.000
#> sample_38     1  0.0000      0.982 1.000 0.000
#> sample_28     1  0.0000      0.982 1.000 0.000
#> sample_29     2  0.0000      1.000 0.000 1.000
#> sample_30     1  0.0000      0.982 1.000 0.000
#> sample_31     1  0.0000      0.982 1.000 0.000
#> sample_32     1  0.0000      0.982 1.000 0.000
#> sample_33     1  0.0000      0.982 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-skmeans-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-skmeans-membership-heatmap-1

Once the column classes are determined, we can look for signatures that are significantly different between classes; these can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-skmeans-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-skmeans-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-skmeans-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-skmeans-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-skmeans-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>              n ALL.AML(p) k
#> ATC:skmeans 71   5.23e-05 2
#> ATC:skmeans 70   1.85e-12 3
#> ATC:skmeans 63   8.36e-12 4
#> ATC:skmeans 60   2.60e-10 5
#> ATC:skmeans 51   1.22e-07 6
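The two tests mentioned above can be sketched with base R on toy data. This is only a conceptual illustration — the labels and annotation values below are hypothetical and not taken from this report, and the exact implementation inside test_to_known_factors() may differ:

```r
# Conceptual sketch of the two tests used for subgroup/annotation association
# (toy data, hypothetical values -- not from this report)
subgroup = factor(c(1, 1, 1, 2, 2, 2))

# Discrete annotation -> chi-squared test on the contingency table
anno_discrete = factor(c("ALL", "ALL", "AML", "AML", "AML", "ALL"))
chi = chisq.test(table(subgroup, anno_discrete))

# Numeric annotation -> one-way ANOVA
anno_numeric = c(5.2, 4.8, 5.1, 7.9, 8.3, 8.0)
fit = summary(aov(anno_numeric ~ subgroup))
```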

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:pam

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "pam"]
# you can also extract it by
# res = res_list["ATC:pam"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'ATC' method.
#>   Subgroups are detected by 'pam' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between the different values of k.

collect_plots(res)

plot of chunk ATC-pam-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, we do not accept that k is better than k-1.
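To make the two agreement indices concrete, they can be computed over all sample pairs for two toy partitions. This is a sketch only — cola computes these statistics internally (via the clue package, as an assumption), so the code below is a conceptual illustration rather than cola's implementation:

```r
# Toy example: pairwise Rand and Jaccard indices between two partitions
# (hypothetical labels -- not from this report)
cl_a = c(1, 1, 1, 2, 2, 2)   # e.g. partition at k
cl_b = c(1, 1, 2, 2, 2, 2)   # e.g. partition at k-1

pairs  = combn(length(cl_a), 2)                 # all sample pairs
same_a = cl_a[pairs[1, ]] == cl_a[pairs[2, ]]   # pair co-clustered in cl_a?
same_b = cl_b[pairs[1, ]] == cl_b[pairs[2, ]]   # pair co-clustered in cl_b?

rand    = mean(same_a == same_b)                # pairs on which both agree
jaccard = sum(same_a & same_b) / sum(same_a | same_b)
```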

select_partition_number(res)

plot of chunk ATC-pam-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.597           0.808       0.918         0.4755 0.518   0.518
#> 3 3 0.525           0.658       0.843         0.3445 0.690   0.478
#> 4 4 0.619           0.692       0.842         0.1433 0.861   0.645
#> 5 5 0.644           0.521       0.770         0.0768 0.896   0.653
#> 6 6 0.676           0.582       0.763         0.0422 0.856   0.458

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.
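The class assignment and the entropy can be reproduced directly from a membership matrix. The sketch below uses a small hypothetical membership matrix; the entropy formula (Shannon entropy normalized by log2(k)) is an assumption consistent with the values shown in the tables, not code taken from cola:

```r
# Hypothetical 2-column membership matrix; each row sums to 1
p = rbind(c(1.000, 0.000),
          c(0.668, 0.332),
          c(0.996, 0.004))

# Final class label = group with the highest membership probability
class = apply(p, 1, which.max)

# Shannon entropy normalized by log2(k) (assumed formula)
entropy = apply(p, 1, function(x) {
    x = x[x > 0]
    -sum(x * log2(x)) / log2(ncol(p))
})
```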


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  0.0000     0.8929 1.000 0.000
#> sample_40     1  0.0000     0.8929 1.000 0.000
#> sample_42     2  0.9491     0.3684 0.368 0.632
#> sample_47     2  0.0000     0.9141 0.000 1.000
#> sample_48     2  0.0000     0.9141 0.000 1.000
#> sample_49     1  0.0000     0.8929 1.000 0.000
#> sample_41     2  0.0000     0.9141 0.000 1.000
#> sample_43     2  0.0000     0.9141 0.000 1.000
#> sample_44     2  0.0000     0.9141 0.000 1.000
#> sample_45     2  0.0000     0.9141 0.000 1.000
#> sample_46     2  0.0000     0.9141 0.000 1.000
#> sample_70     1  0.8909     0.6109 0.692 0.308
#> sample_71     1  0.9170     0.5669 0.668 0.332
#> sample_72     2  0.3879     0.8601 0.076 0.924
#> sample_68     2  0.0000     0.9141 0.000 1.000
#> sample_69     2  0.0000     0.9141 0.000 1.000
#> sample_67     2  0.8327     0.6100 0.264 0.736
#> sample_55     1  0.7376     0.7403 0.792 0.208
#> sample_56     1  0.0376     0.8908 0.996 0.004
#> sample_59     2  0.7299     0.7095 0.204 0.796
#> sample_52     1  0.0376     0.8910 0.996 0.004
#> sample_53     1  0.0000     0.8929 1.000 0.000
#> sample_51     1  0.0000     0.8929 1.000 0.000
#> sample_50     1  0.0000     0.8929 1.000 0.000
#> sample_54     2  0.9977    -0.0261 0.472 0.528
#> sample_57     1  0.0000     0.8929 1.000 0.000
#> sample_58     1  0.0000     0.8929 1.000 0.000
#> sample_60     1  0.8555     0.6527 0.720 0.280
#> sample_61     1  0.0000     0.8929 1.000 0.000
#> sample_65     1  0.0000     0.8929 1.000 0.000
#> sample_66     2  0.0000     0.9141 0.000 1.000
#> sample_63     1  0.0000     0.8929 1.000 0.000
#> sample_64     1  0.0000     0.8929 1.000 0.000
#> sample_62     1  0.0000     0.8929 1.000 0.000
#> sample_1      1  0.8909     0.6109 0.692 0.308
#> sample_2      1  1.0000     0.0788 0.500 0.500
#> sample_3      1  0.7602     0.7278 0.780 0.220
#> sample_4      1  0.8909     0.6109 0.692 0.308
#> sample_5      2  0.0000     0.9141 0.000 1.000
#> sample_6      1  0.6438     0.7817 0.836 0.164
#> sample_7      1  0.0000     0.8929 1.000 0.000
#> sample_8      1  0.0000     0.8929 1.000 0.000
#> sample_9      2  0.3733     0.8624 0.072 0.928
#> sample_10     1  0.8813     0.6236 0.700 0.300
#> sample_11     2  0.6148     0.7802 0.152 0.848
#> sample_12     1  0.0000     0.8929 1.000 0.000
#> sample_13     2  0.0000     0.9141 0.000 1.000
#> sample_14     2  0.0000     0.9141 0.000 1.000
#> sample_15     2  0.0000     0.9141 0.000 1.000
#> sample_16     2  0.0000     0.9141 0.000 1.000
#> sample_17     2  0.0376     0.9118 0.004 0.996
#> sample_18     1  0.8861     0.6173 0.696 0.304
#> sample_19     2  0.0000     0.9141 0.000 1.000
#> sample_20     2  0.0000     0.9141 0.000 1.000
#> sample_21     2  0.0000     0.9141 0.000 1.000
#> sample_22     1  0.0000     0.8929 1.000 0.000
#> sample_23     1  0.4815     0.8285 0.896 0.104
#> sample_24     2  0.0000     0.9141 0.000 1.000
#> sample_25     1  0.7528     0.7317 0.784 0.216
#> sample_26     2  0.9358     0.4048 0.352 0.648
#> sample_27     1  0.0000     0.8929 1.000 0.000
#> sample_34     1  0.0000     0.8929 1.000 0.000
#> sample_35     1  0.0000     0.8929 1.000 0.000
#> sample_36     1  0.0000     0.8929 1.000 0.000
#> sample_37     1  0.0000     0.8929 1.000 0.000
#> sample_38     1  0.0000     0.8929 1.000 0.000
#> sample_28     1  0.0000     0.8929 1.000 0.000
#> sample_29     1  0.9754     0.3881 0.592 0.408
#> sample_30     1  0.0000     0.8929 1.000 0.000
#> sample_31     1  0.0000     0.8929 1.000 0.000
#> sample_32     1  0.0000     0.8929 1.000 0.000
#> sample_33     1  0.0000     0.8929 1.000 0.000

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-pam-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-pam-membership-heatmap-1

Once the column classes are determined, we can look for signatures that are significantly different between classes; these can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-pam-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-pam-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-pam-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-pam-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-pam-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n ALL.AML(p) k
#> ATC:pam 67   1.65e-04 2
#> ATC:pam 59   3.74e-06 3
#> ATC:pam 57   5.47e-07 4
#> ATC:pam 46   1.05e-05 5
#> ATC:pam 47   1.08e-06 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:mclust

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "mclust"]
# you can also extract it by
# res = res_list["ATC:mclust"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'ATC' method.
#>   Subgroups are detected by 'mclust' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between the different values of k.

collect_plots(res)

plot of chunk ATC-mclust-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-mclust-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.360           0.711       0.828         0.4257 0.493   0.493
#> 3 3 0.376           0.532       0.683         0.4004 0.718   0.490
#> 4 4 0.424           0.471       0.640         0.1614 0.733   0.362
#> 5 5 0.471           0.479       0.646         0.0578 0.910   0.666
#> 6 6 0.701           0.599       0.790         0.1118 0.869   0.485

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  1.0000     -0.361 0.504 0.496
#> sample_40     2  0.9209      0.748 0.336 0.664
#> sample_42     1  0.8713      0.475 0.708 0.292
#> sample_47     2  0.5408      0.745 0.124 0.876
#> sample_48     2  0.0000      0.692 0.000 1.000
#> sample_49     2  0.9044      0.766 0.320 0.680
#> sample_41     2  0.0672      0.697 0.008 0.992
#> sample_43     2  0.9460      0.697 0.364 0.636
#> sample_44     2  0.4431      0.736 0.092 0.908
#> sample_45     2  0.4431      0.736 0.092 0.908
#> sample_46     2  0.7056      0.759 0.192 0.808
#> sample_70     2  0.9044      0.765 0.320 0.680
#> sample_71     1  0.8713      0.475 0.708 0.292
#> sample_72     1  0.8713      0.475 0.708 0.292
#> sample_68     2  0.0000      0.692 0.000 1.000
#> sample_69     2  0.0000      0.692 0.000 1.000
#> sample_67     1  0.1633      0.844 0.976 0.024
#> sample_55     2  0.9209      0.749 0.336 0.664
#> sample_56     2  0.9044      0.766 0.320 0.680
#> sample_59     2  0.9795      0.596 0.416 0.584
#> sample_52     1  0.0938      0.851 0.988 0.012
#> sample_53     1  0.0376      0.850 0.996 0.004
#> sample_51     1  0.0376      0.850 0.996 0.004
#> sample_50     1  0.0000      0.846 1.000 0.000
#> sample_54     1  0.2948      0.822 0.948 0.052
#> sample_57     1  0.1633      0.845 0.976 0.024
#> sample_58     1  0.1184      0.849 0.984 0.016
#> sample_60     1  0.2948      0.822 0.948 0.052
#> sample_61     1  0.0672      0.852 0.992 0.008
#> sample_65     1  0.0672      0.852 0.992 0.008
#> sample_66     1  0.9963      0.108 0.536 0.464
#> sample_63     1  0.1414      0.847 0.980 0.020
#> sample_64     1  0.1414      0.847 0.980 0.020
#> sample_62     1  0.1414      0.847 0.980 0.020
#> sample_1      2  0.9044      0.766 0.320 0.680
#> sample_2      1  0.8713      0.475 0.708 0.292
#> sample_3      2  0.9358      0.728 0.352 0.648
#> sample_4      2  0.9044      0.766 0.320 0.680
#> sample_5      2  0.0000      0.692 0.000 1.000
#> sample_6      2  0.9129      0.760 0.328 0.672
#> sample_7      2  0.9044      0.766 0.320 0.680
#> sample_8      2  0.9933      0.500 0.452 0.548
#> sample_9      2  0.9129      0.760 0.328 0.672
#> sample_10     1  0.9608      0.157 0.616 0.384
#> sample_11     1  0.8813      0.455 0.700 0.300
#> sample_12     1  0.0672      0.852 0.992 0.008
#> sample_13     2  0.0000      0.692 0.000 1.000
#> sample_14     2  0.6438      0.754 0.164 0.836
#> sample_15     2  0.0000      0.692 0.000 1.000
#> sample_16     2  0.9044      0.766 0.320 0.680
#> sample_17     2  0.8955      0.768 0.312 0.688
#> sample_18     2  0.9087      0.763 0.324 0.676
#> sample_19     2  0.8955      0.768 0.312 0.688
#> sample_20     2  0.0000      0.692 0.000 1.000
#> sample_21     2  0.1184      0.701 0.016 0.984
#> sample_22     1  0.8713      0.475 0.708 0.292
#> sample_23     2  0.9922      0.520 0.448 0.552
#> sample_24     2  0.4690      0.738 0.100 0.900
#> sample_25     1  0.8713      0.475 0.708 0.292
#> sample_26     2  0.9044      0.766 0.320 0.680
#> sample_27     2  0.9044      0.766 0.320 0.680
#> sample_34     1  0.0672      0.852 0.992 0.008
#> sample_35     1  0.0672      0.852 0.992 0.008
#> sample_36     1  0.0000      0.846 1.000 0.000
#> sample_37     1  0.0000      0.846 1.000 0.000
#> sample_38     1  0.0376      0.850 0.996 0.004
#> sample_28     1  0.0672      0.852 0.992 0.008
#> sample_29     1  0.0672      0.852 0.992 0.008
#> sample_30     1  0.0376      0.850 0.996 0.004
#> sample_31     1  0.0672      0.852 0.992 0.008
#> sample_32     1  0.0672      0.852 0.992 0.008
#> sample_33     1  0.0672      0.852 0.992 0.008

Heatmap of the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-mclust-consensus-heatmap-1

Heatmap of the sample memberships in all partitions, to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-mclust-membership-heatmap-1

Once the column classes are determined, we can look for signatures that are significantly different between classes; these can serve as candidate markers for certain classes. Following are the heatmaps for the signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-mclust-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-mclust-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-mclust-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.

UMAP plot showing how the samples are separated:

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-mclust-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-mclust-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>             n ALL.AML(p) k
#> ATC:mclust 62   1.26e-12 2
#> ATC:mclust 46   5.78e-10 3
#> ATC:mclust 46   6.40e-09 4
#> ATC:mclust 51   1.68e-09 5
#> ATC:mclust 49   2.22e-09 6

If the matrix rows can be associated with genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.


ATC:NMF

The object with results only for a single top-value method and a single partition method can be extracted as:

res = res_list["ATC", "NMF"]
# you can also extract it by
# res = res_list["ATC:NMF"]

A summary of res and all the functions that can be applied to it:

res
#> A 'ConsensusPartition' object with k = 2, 3, 4, 5, 6.
#>   On a matrix with 4116 rows and 72 columns.
#>   Top rows (412, 824, 1235, 1646, 2058) are extracted by 'ATC' method.
#>   Subgroups are detected by 'NMF' method.
#>   Performed in total 1250 partitions by row resampling.
#>   Best k for subgroups seems to be 2.
#> 
#> Following methods can be applied to this 'ConsensusPartition' object:
#>  [1] "cola_report"             "collect_classes"         "collect_plots"          
#>  [4] "collect_stats"           "colnames"                "compare_signatures"     
#>  [7] "consensus_heatmap"       "dimension_reduction"     "functional_enrichment"  
#> [10] "get_anno_col"            "get_anno"                "get_classes"            
#> [13] "get_consensus"           "get_matrix"              "get_membership"         
#> [16] "get_param"               "get_signatures"          "get_stats"              
#> [19] "is_best_k"               "is_stable_k"             "membership_heatmap"     
#> [22] "ncol"                    "nrow"                    "plot_ecdf"              
#> [25] "rownames"                "select_partition_number" "show"                   
#> [28] "suggest_best_k"          "test_to_known_factors"

The collect_plots() function collects all the plots made from res for all k (number of partitions) into a single page, providing an easy and fast comparison between the different values of k.

collect_plots(res)

plot of chunk ATC-NMF-collect-plots

The plots are:

All the plots in the panels can also be made by individual functions; they are plotted later in this section.

select_partition_number() produces several plots showing the different statistics for choosing an “optimized” k.

The detailed explanations of these statistics can be found in the cola vignette.

Generally speaking, a lower PAC score, a higher mean silhouette score, or a higher concordance corresponds to a better partition. The Rand index and the Jaccard index measure how similar the current partition is to the partition with k-1; if they are too similar, we do not accept that k is better than k-1.

select_partition_number(res)

plot of chunk ATC-NMF-select-partition-number

The numeric values for all these statistics can be obtained by get_stats().

get_stats(res)
#>   k 1-PAC mean_silhouette concordance area_increased  Rand Jaccard
#> 2 2 0.972           0.927       0.973         0.5047 0.495   0.495
#> 3 3 0.642           0.776       0.885         0.3090 0.782   0.584
#> 4 4 0.558           0.673       0.788         0.1178 0.900   0.717
#> 5 5 0.590           0.557       0.763         0.0700 0.876   0.587
#> 6 6 0.597           0.514       0.685         0.0426 0.942   0.734

suggest_best_k() suggests the best k based on these statistics.

suggest_best_k(res)
#> [1] 2

The following shows the table of partitions. The membership matrix (columns named p*) is inferred by the clue::cl_consensus() function with the SE method. Each value in the membership matrix represents the probability of belonging to a certain group. The final class label for an item is the group to which it belongs with the highest probability.

In get_classes() function, the entropy is calculated from the membership matrix and the silhouette score is calculated from the consensus matrix.


cbind(get_classes(res, k = 2), get_membership(res, k = 2))
#>           class entropy silhouette    p1    p2
#> sample_39     1  0.0000     0.9711 1.000 0.000
#> sample_40     1  0.0000     0.9711 1.000 0.000
#> sample_42     2  0.0000     0.9719 0.000 1.000
#> sample_47     2  0.0000     0.9719 0.000 1.000
#> sample_48     2  0.0000     0.9719 0.000 1.000
#> sample_49     1  0.0000     0.9711 1.000 0.000
#> sample_41     2  0.0000     0.9719 0.000 1.000
#> sample_43     2  0.0000     0.9719 0.000 1.000
#> sample_44     2  0.0000     0.9719 0.000 1.000
#> sample_45     2  0.0000     0.9719 0.000 1.000
#> sample_46     2  0.0000     0.9719 0.000 1.000
#> sample_70     2  0.1184     0.9594 0.016 0.984
#> sample_71     2  0.3114     0.9196 0.056 0.944
#> sample_72     2  0.0000     0.9719 0.000 1.000
#> sample_68     2  0.0000     0.9719 0.000 1.000
#> sample_69     2  0.0000     0.9719 0.000 1.000
#> sample_67     2  0.0000     0.9719 0.000 1.000
#> sample_55     1  0.0376     0.9679 0.996 0.004
#> sample_56     1  0.0000     0.9711 1.000 0.000
#> sample_59     2  0.0376     0.9691 0.004 0.996
#> sample_52     1  0.0000     0.9711 1.000 0.000
#> sample_53     1  0.0000     0.9711 1.000 0.000
#> sample_51     1  0.0000     0.9711 1.000 0.000
#> sample_50     1  0.0000     0.9711 1.000 0.000
#> sample_54     2  0.0376     0.9691 0.004 0.996
#> sample_57     1  0.0000     0.9711 1.000 0.000
#> sample_58     1  0.0000     0.9711 1.000 0.000
#> sample_60     1  0.9833     0.2488 0.576 0.424
#> sample_61     1  0.0000     0.9711 1.000 0.000
#> sample_65     1  0.0000     0.9711 1.000 0.000
#> sample_66     2  0.0000     0.9719 0.000 1.000
#> sample_63     1  0.0000     0.9711 1.000 0.000
#> sample_64     1  0.0000     0.9711 1.000 0.000
#> sample_62     1  0.0000     0.9711 1.000 0.000
#> sample_1      1  0.4690     0.8683 0.900 0.100
#> sample_2      2  0.0000     0.9719 0.000 1.000
#> sample_3      1  0.0376     0.9679 0.996 0.004
#> sample_4      2  1.0000    -0.0262 0.500 0.500
#> sample_5      2  0.0000     0.9719 0.000 1.000
#> sample_6      1  0.0000     0.9711 1.000 0.000
#> sample_7      1  0.0000     0.9711 1.000 0.000
#> sample_8      1  0.0000     0.9711 1.000 0.000
#> sample_9      2  0.0000     0.9719 0.000 1.000
#> sample_10     2  0.8763     0.5652 0.296 0.704
#> sample_11     2  0.0000     0.9719 0.000 1.000
#> sample_12     1  0.0000     0.9711 1.000 0.000
#> sample_13     2  0.0000     0.9719 0.000 1.000
#> sample_14     2  0.0000     0.9719 0.000 1.000
#> sample_15     2  0.0000     0.9719 0.000 1.000
#> sample_16     2  0.0000     0.9719 0.000 1.000
#> sample_17     2  0.0000     0.9719 0.000 1.000
#> sample_18     1  0.9983     0.0707 0.524 0.476
#> sample_19     2  0.0000     0.9719 0.000 1.000
#> sample_20     2  0.0000     0.9719 0.000 1.000
#> sample_21     2  0.0000     0.9719 0.000 1.000
#> sample_22     1  0.0000     0.9711 1.000 0.000
#> sample_23     1  0.0000     0.9711 1.000 0.000
#> sample_24     2  0.0000     0.9719 0.000 1.000
#> sample_25     1  0.1184     0.9572 0.984 0.016
#> sample_26     2  0.0938     0.9629 0.012 0.988
#> sample_27     1  0.0000     0.9711 1.000 0.000
#> sample_34     1  0.0000     0.9711 1.000 0.000
#> sample_35     1  0.0000     0.9711 1.000 0.000
#> sample_36     1  0.0000     0.9711 1.000 0.000
#> sample_37     1  0.0000     0.9711 1.000 0.000
#> sample_38     1  0.0000     0.9711 1.000 0.000
#> sample_28     1  0.0000     0.9711 1.000 0.000
#> sample_29     2  0.0000     0.9719 0.000 1.000
#> sample_30     1  0.0000     0.9711 1.000 0.000
#> sample_31     1  0.0000     0.9711 1.000 0.000
#> sample_32     1  0.0000     0.9711 1.000 0.000
#> sample_33     1  0.0000     0.9711 1.000 0.000

Heatmap for the consensus matrix. It visualizes the probability of two samples being in the same group.

consensus_heatmap(res, k = 2)

plot of chunk tab-ATC-NMF-consensus-heatmap-1
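Conceptually, the consensus value for a pair of samples is the fraction of resampled partitions in which the two samples receive the same class label. A minimal base-R sketch of this definition, using mock partition labels rather than cola's internal implementation:

```r
# Mock partitions: each column is one resampled run assigning
# a class label (1 or 2) to each sample.
set.seed(1)
n_samples = 6
n_partitions = 50
partitions = matrix(sample(1:2, n_samples * n_partitions, replace = TRUE),
                    nrow = n_samples)

# consensus[i, j] = fraction of runs where samples i and j co-cluster
consensus = matrix(0, n_samples, n_samples)
for (i in seq_len(n_samples)) {
    for (j in seq_len(n_samples)) {
        consensus[i, j] = mean(partitions[i, ] == partitions[j, ])
    }
}
diag(consensus)  # always 1: a sample always co-clusters with itself
```

Values near 1 or 0 for off-diagonal pairs indicate a stable partition; values near 0.5 indicate ambiguous pairs, which is what the consensus heatmap makes visible.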

Heatmaps for the membership of samples in all partitions to see how consistent they are:

membership_heatmap(res, k = 2)

plot of chunk tab-ATC-NMF-membership-heatmap-1

Once the class labels for the columns are available, we can look for signatures that are significantly different between classes; these can serve as candidate markers for the classes. Following are the heatmaps for signatures.

Signature heatmaps where rows are scaled:

get_signatures(res, k = 2)

plot of chunk tab-ATC-NMF-get-signatures-1

Signature heatmaps where rows are not scaled:

get_signatures(res, k = 2, scale_rows = FALSE)

plot of chunk tab-ATC-NMF-get-signatures-no-scale-1

Compare the overlap of signatures from different k:

compare_signatures(res)

plot of chunk ATC-NMF-signature_compare

get_signatures() returns a data frame invisibly. To get the list of signatures, the function call should be assigned to a variable explicitly. In the following code, if the plot argument is set to FALSE, no heatmap is plotted and only the differential analysis is performed.

# code only for demonstration
tb = get_signatures(res, k = ..., plot = FALSE)

An example of the output of tb is:

#>   which_row         fdr    mean_1    mean_2 scaled_mean_1 scaled_mean_2 km
#> 1        38 0.042760348  8.373488  9.131774    -0.5533452     0.5164555  1
#> 2        40 0.018707592  7.106213  8.469186    -0.6173731     0.5762149  1
#> 3        55 0.019134737 10.221463 11.207825    -0.6159697     0.5749050  1
#> 4        59 0.006059896  5.921854  7.869574    -0.6899429     0.6439467  1
#> 5        60 0.018055526  8.928898 10.211722    -0.6204761     0.5791110  1
#> 6        98 0.009384629 15.714769 14.887706     0.6635654    -0.6193277  2
...

The columns in tb are:

  1. which_row: row indices corresponding to the input matrix.
  2. fdr: FDR for the differential test.
  3. mean_x: The mean value in group x.
  4. scaled_mean_x: The mean value in group x after rows are scaled.
  5. km: Row groups if k-means clustering is applied to rows.
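A typical next step is to filter the signature table by FDR and group the surviving rows by their k-means cluster. The sketch below runs on a mock data frame with the same column names as tb (the values merely echo the example output above), so it is self-contained:

```r
# Mock data frame shaped like tb; values echo the example output above.
tb = data.frame(
    which_row = c(38, 40, 55, 59, 60, 98),
    fdr       = c(0.0428, 0.0187, 0.0191, 0.0061, 0.0181, 0.0094),
    km        = c(1, 1, 1, 1, 1, 2)
)

# Keep signatures passing a stricter FDR cutoff
sig = tb[tb$fdr < 0.02, ]

# Row indices of the input matrix, grouped by k-means row cluster
split(sig$which_row, sig$km)
```

Because which_row holds row indices of the input matrix, the grouped indices can be used directly to subset the matrix, e.g. `mat[sig$which_row, ]`.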

The UMAP plot shows how the samples are separated.

dimension_reduction(res, k = 2, method = "UMAP")

plot of chunk tab-ATC-NMF-dimension-reduction-1

The following heatmap shows how the subgroups split as k increases:

collect_classes(res)

plot of chunk ATC-NMF-collect-classes

Test the correlation between subgroups and known annotations. If the known annotation is numeric, a one-way ANOVA test is applied; if the known annotation is discrete, a chi-squared contingency table test is applied.

test_to_known_factors(res)
#>          n ALL.AML(p) k
#> ATC:NMF 69   5.41e-05 2
#> ATC:NMF 65   2.97e-08 3
#> ATC:NMF 62   1.00e-09 4
#> ATC:NMF 48   4.17e-08 5
#> ATC:NMF 44   1.87e-07 6
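For a discrete annotation, the underlying test is a standard chi-squared test on the contingency table of predicted classes versus annotation levels. A sketch on mock labels (the counts are hypothetical, loosely echoing a two-class ALL/AML split):

```r
# Hypothetical class assignments and a two-level known annotation
classes = c(rep(1, 25), rep(2, 20),   # 45 "ALL" samples
            rep(1, 5),  rep(2, 22))   # 27 "AML" samples
anno    = c(rep("ALL", 45), rep("AML", 27))

# 2 x 2 contingency table of class vs. annotation
tab = table(classes, anno)

# Small p-value means the subgroups track the known annotation
p = chisq.test(tab)$p.value
```

Samples with silhouette scores below a cutoff are removed before testing, which is why the n column in the output above shrinks as k grows.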

If the matrix rows can be associated to genes, consider using functional_enrichment(res, ...) to perform functional enrichment for the signature genes. See this vignette for more detailed explanations.

Session info

sessionInfo()
#> R version 3.6.0 (2019-04-26)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: CentOS Linux 7 (Core)
#> 
#> Matrix products: default
#> BLAS:   /usr/lib64/libblas.so.3.4.2
#> LAPACK: /usr/lib64/liblapack.so.3.4.2
#> 
#> locale:
#>  [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C               LC_TIME=en_GB.UTF-8       
#>  [4] LC_COLLATE=en_GB.UTF-8     LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8   
#>  [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
#> [10] LC_TELEPHONE=C             LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C       
#> 
#> attached base packages:
#> [1] grid      stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] genefilter_1.66.0    ComplexHeatmap_2.3.1 markdown_1.1         knitr_1.26          
#> [5] GetoptLong_0.1.7     cola_1.3.2          
#> 
#> loaded via a namespace (and not attached):
#>  [1] circlize_0.4.8       shape_1.4.4          xfun_0.11            slam_0.1-46         
#>  [5] lattice_0.20-38      splines_3.6.0        colorspace_1.4-1     vctrs_0.2.0         
#>  [9] stats4_3.6.0         blob_1.2.0           XML_3.98-1.20        survival_2.44-1.1   
#> [13] rlang_0.4.2          pillar_1.4.2         DBI_1.0.0            BiocGenerics_0.30.0 
#> [17] bit64_0.9-7          RColorBrewer_1.1-2   matrixStats_0.55.0   stringr_1.4.0       
#> [21] GlobalOptions_0.1.1  evaluate_0.14        memoise_1.1.0        Biobase_2.44.0      
#> [25] IRanges_2.18.3       parallel_3.6.0       AnnotationDbi_1.46.1 highr_0.8           
#> [29] Rcpp_1.0.3           xtable_1.8-4         backports_1.1.5      S4Vectors_0.22.1    
#> [33] annotate_1.62.0      skmeans_0.2-11       bit_1.1-14           microbenchmark_1.4-7
#> [37] brew_1.0-6           impute_1.58.0        rjson_0.2.20         png_0.1-7           
#> [41] digest_0.6.23        stringi_1.4.3        polyclip_1.10-0      clue_0.3-57         
#> [45] tools_3.6.0          bitops_1.0-6         magrittr_1.5         eulerr_6.0.0        
#> [49] RCurl_1.95-4.12      RSQLite_2.1.4        tibble_2.1.3         cluster_2.1.0       
#> [53] crayon_1.3.4         pkgconfig_2.0.3      zeallot_0.1.0        Matrix_1.2-17       
#> [57] xml2_1.2.2           httr_1.4.1           R6_2.4.1             mclust_5.4.5        
#> [61] compiler_3.6.0