The Karate Club social network dataset is provided as a GML file containing 34 nodes and 78 edges.
# Start the timer
t1 <- system.time({
  dataset_path <- system.file("extdata", "karate.gml", package = "arlclustering")
  if (dataset_path == "") {
    stop("karate.gml file not found")
  }
  g <- arlc_get_network_dataset(dataset_path, "Karate Club")
  g$graphLabel
  g$totalNodes
  g$totalEdges
  g$averageDegree
})
# Display the total processing time
message("Graph loading Processing Time: ", t1["elapsed"], " seconds\n")
#> Graph loading Processing Time: 0.0120000000000005 seconds
Next, we generate the transaction dataset from the graph g. The length of the filtered transactional dataset is 28.
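The transaction-generation step itself is not shown above. A minimal sketch, assuming the package exposes a generator named `arlc_gen_transactions` that takes the igraph object held in `g$graph` (both assumptions about the API, not confirmed by the text), could look like:

```r
# Sketch only: arlc_gen_transactions and its signature are assumed here.
t2 <- system.time({
  transactions <- arlc_gen_transactions(g$graph)
  length(transactions)  # expected to be 28 after filtering
})
message("Transactions generation Processing Time: ", t2["elapsed"], " seconds\n")
```

The resulting `transactions` object is what the threshold-selection step below consumes.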
We obtain the Apriori thresholds for the generated transactions. The following thresholds are used for the Apriori execution:

- Minimum Support: 0.1
- Minimum Confidence: 0.5
- Lift: 7
- Gross Rules length: 66
- Selection Ratio: 2
# Start the timer
t3 <- system.time({
  params <- arlc_get_apriori_thresholds(transactions,
                                        supportRange = seq(0.1, 0.2, by = 0.1),
                                        Conf = 0.5)
  params$minSupp
  params$minConf
  params$bestLift
  params$lenRules
  params$ratio
})
# Display the total processing time
message("Apriori thresholds Processing Time: ", t3["elapsed"], " seconds\n")
#> Apriori thresholds Processing Time: 0.00999999999999801 seconds
We use the obtained parameters to generate gross rules.
# Start the timer
t4 <- system.time({
  minLenRules <- 1
  maxLenRules <- params$lenRules
  # Cap the maximum rule length to keep the Apriori search tractable
  if (!is.finite(maxLenRules) || maxLenRules > 5 * length(transactions)) {
    maxLenRules <- 5 * length(transactions)
  }
  grossRules <- arlc_gen_gross_rules(transactions,
                                     minSupp = params$minSupp,
                                     minConf = params$minConf,
                                     minLenRules = minLenRules + 1,
                                     maxLenRules = maxLenRules)
})
#> Apriori
#>
#> Parameter specification:
#> confidence minval smax arem aval originalSupport maxtime support minlen
#> 0.5 0.1 1 none FALSE TRUE 5 0.1 2
#> maxlen target ext
#> 66 rules TRUE
#>
#> Algorithmic control:
#> filter tree heap memopt load sort verbose
#> 0.1 TRUE TRUE FALSE TRUE 2 TRUE
#>
#> Absolute minimum support count: 2
#>
#> set item appearances ...[0 item(s)] done [0.00s].
#> set transactions ...[34 item(s), 28 transaction(s)] done [0.00s].
#> sorting and recoding items ... [22 item(s)] done [0.00s].
#> creating transaction tree ... done [0.00s].
#> checking subsets of size 1 2 3 4 done [0.00s].
#> writing ... [65 rule(s)] done [0.00s].
#> creating S4 object ... done [0.00s].
We filter out redundant rules from the generated gross rules. Next, we remove non-significant rules from the non-redundant set, obtaining 50 rules.
t5 <- system.time({
  NonRedRules <- arlc_get_NonR_rules(grossRules$GrossRules)
  NonRSigRules <- arlc_get_significant_rules(transactions,
                                             NonRedRules$FiltredRules)
  NonRSigRules$TotFiltredRules
})
# Display the total processing time
message("Clearing rules Processing Time: ", t5["elapsed"], " seconds\n")
#> Clearing rules Processing Time: 0.0820000000000007 seconds
We clean the final set of rules to prepare for clustering, then generate clusters from the cleaned rules. A total of 12 clusters is identified.
t6 <- system.time({
  cleanedRules <- arlc_clean_final_rules(NonRSigRules$FiltredRules)
  clusters <- arlc_generate_clusters(cleanedRules)
  clusters$TotClusters
})
# Display the total number of clusters and the total processing time
message("Cleaning final rules Processing Time: ", t6["elapsed"], " seconds\n")
#> Cleaning final rules Processing Time: 0.00600000000000023 seconds
Finally, we visualize the identified clusters.
arlc_clusters_plot(g$graph,
                   g$graphLabel,
                   clusters$Clusters)
#>
#> Total Identified Clusters: 12
#> =========================
#> Community 01:1 2 3 4 8 14
#> Community 02:2 3 4 8 9 14
#> Community 03:3 4 8 14 31 32 34
#> Community 04:5 6
#> Community 05:7 11
#> Community 06:9 14 32 33
#> Community 07:14 20
#> Community 08:24 32 34
#> Community 09:28 33
#> Community 10:29 33
#> Community 11:30 34
#> Community 12:33 34
#> =========================
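Note that the communities above overlap (node 14, for instance, appears in several of them), so a single membership vector is not directly applicable. Assuming `clusters$Clusters` is a list of integer node vectors matching the printout (an assumption about the return structure), the coverage and overlap can be summarized with base R:

```r
# Hypothetical post-processing: assumes clusters$Clusters is a list of
# integer vectors of node ids, matching the printed communities above.
comms <- clusters$Clusters
counts <- table(unlist(comms))   # how many communities each node belongs to
covered <- length(counts)        # nodes appearing in at least one community
overlapping <- sum(counts > 1)   # nodes shared between communities
message(covered, " of ", g$totalNodes, " nodes covered; ",
        overlapping, " appear in more than one community")
```

This kind of summary makes it easy to spot hub nodes that bridge several communities.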