| author | JP Appel <jeanpierre.appel01@gmail.com> | 2024-04-27 14:03:43 -0400 |
| committer | JP Appel <jeanpierre.appel01@gmail.com> | 2024-04-27 14:03:43 -0400 |
| commit | 922c57945e531220d3191a657bdf382ab1d95a99 (patch) | |
| tree | 2fda11297b8c0268f9b912346e5456bdd250cbda | |
| parent | 25f3bba424dbb017f69f6bfae91acedfc0702d1b (diff) | |
improved parallelism graph coloring
| -rw-r--r-- | .gitignore | 1 |
| -rw-r--r-- | analysis/analysis.Rmd | 11 |

2 files changed, 7 insertions, 5 deletions
diff --git a/.gitignore b/.gitignore
--- a/.gitignore
+++ b/.gitignore
@@ -60,3 +60,4 @@ modules.order
 Module.symvers
 Mkfile.old
 dkms.conf
+.Rproj.user
diff --git a/analysis/analysis.Rmd b/analysis/analysis.Rmd
index 738db5f..4dd0831 100644
--- a/analysis/analysis.Rmd
+++ b/analysis/analysis.Rmd
@@ -291,6 +291,8 @@ Most likely this resource is a specialized arrithmetic unit.
 On the GPU we have a similar issue, the register count per thread from about 32 to 48
 when computing the multibrot and multicorn fractals.
 
+(Thanks to Yousuf for improving the legibility of these graphs)
+
 ```{r program_parallelism, echo=FALSE}
 parallel <- full_data %>%
   filter(program != "serial") %>%
@@ -302,8 +304,8 @@ parallel <- full_data %>%
 
 ```{r parallel_plot}
 parallel_plot <- parallel %>%
-  ggplot(aes(x = samples, y = percentage_parallel, color = program,
-             shape = factor(threads))) +
+  ggplot(aes(x = samples, y = percentage_parallel,
+             color = interaction(program,threads))) +
   geom_point() +
   stat_summary(fun = median, geom = "line",
                aes(group = interaction(program, fractal, threads))) +
@@ -311,8 +313,7 @@ parallel_plot <- parallel %>%
   labs(title = "Program/Fractal Parallelism",
        x = "Samples",
        y = "Percentage Parallel",
-       color = "Implementation",
-       shape = "Threads")
+       color = "Implementation/Threads")
 
 ggplotly(parallel_plot)
 ```
@@ -321,4 +322,4 @@ ggplotly(parallel_plot)
 
 # Conclusions
 From this data we can conclude that the runtime of fractal generation is linear with respect to the number of samples.
-We also conclude that the CUDA implementation is highly parallel, with the shared implementations having varying degrees of parallelism.
\ No newline at end of file
+We also conclude that the CUDA implementation is highly parallel, with the shared implementations having varying degrees of parallelism.
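The substantive change in this diff swaps the two-aesthetic encoding (`color = program`, `shape = factor(threads)`) for a single combined legend built with base R's `interaction()`, which crosses two factors into one. A minimal sketch of that pattern, using made-up data (`df` and its values below are illustrative stand-ins, not the project's actual `full_data`):

```r
library(ggplot2)

# Hypothetical stand-in for the project's benchmark data frame.
df <- data.frame(
  samples             = rep(c(100, 200, 400), times = 4),
  percentage_parallel = runif(12, min = 50, max = 100),
  program             = rep(c("openmp", "cuda"), each = 6),
  threads             = rep(c(4, 8), times = 6)
)

# interaction() crosses program and threads into a single factor
# (levels like "openmp.4", "cuda.8"), so one color scale and one
# legend cover both variables instead of separate color and shape
# legends as in the pre-change code.
p <- ggplot(df, aes(x = samples, y = percentage_parallel,
                    color = interaction(program, threads))) +
  geom_point() +
  labs(color = "Implementation/Threads")
```

One tradeoff of folding `threads` into the color scale: each program/thread pair gets its own hue, which reads well for a handful of combinations but can exhaust a discrete palette if either factor grows.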
