path: root/analysis/analysis.Rmd
author    JP Appel <jeanpierre.appel01@gmail.com> 2024-04-27 14:03:43 -0400
committer JP Appel <jeanpierre.appel01@gmail.com> 2024-04-27 14:03:43 -0400
commit    922c57945e531220d3191a657bdf382ab1d95a99 (patch)
tree      2fda11297b8c0268f9b912346e5456bdd250cbda /analysis/analysis.Rmd
parent    25f3bba424dbb017f69f6bfae91acedfc0702d1b (diff)
improved parallelism graph coloring
Diffstat (limited to 'analysis/analysis.Rmd')
-rw-r--r-- analysis/analysis.Rmd | 11
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/analysis/analysis.Rmd b/analysis/analysis.Rmd
index 738db5f..4dd0831 100644
--- a/analysis/analysis.Rmd
+++ b/analysis/analysis.Rmd
@@ -291,6 +291,8 @@ Most likely this resource is a specialized arithmetic unit.
On the GPU we see a similar issue: the register count per thread rises from about 32 to 48 when computing the multibrot and multicorn fractals.
+(Thanks to Yousuf for improving the legibility of these graphs)
+
```{r program_parallelism, echo=FALSE}
parallel <- full_data %>%
filter(program != "serial") %>%
@@ -302,8 +304,8 @@ parallel <- full_data %>%
```{r parallel_plot}
parallel_plot <- parallel %>%
- ggplot(aes(x = samples, y = percentage_parallel, color = program,
- shape = factor(threads))) +
+ ggplot(aes(x = samples, y = percentage_parallel,
+ color = interaction(program, threads))) +
geom_point() +
stat_summary(fun = median, geom = "line",
aes(group = interaction(program, fractal, threads))) +
@@ -311,8 +313,7 @@ parallel_plot <- parallel %>%
labs(title = "Program/Fractal Parallelism",
x = "Samples",
y = "Percentage Parallel",
- color = "Implementation",
- shape = "Threads")
+ color = "Implementation/Threads")
ggplotly(parallel_plot)
```
@@ -321,4 +322,4 @@ ggplotly(parallel_plot)
# Conclusions
From this data we can conclude that the runtime of fractal generation is linear with respect to the number of samples.
-We also conclude that the CUDA implementation is highly parallel, with the shared implementations having varying degrees of parallelism.
\ No newline at end of file
+We also conclude that the CUDA implementation is highly parallel, with the shared implementations having varying degrees of parallelism.
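
The core of this change is replacing a two-aesthetic encoding (color for program, shape for thread count) with a single color scale keyed on `interaction(program, threads)`, so each program/thread pairing gets its own color and one combined legend. A minimal sketch of that pattern follows; the `toy` data frame is hypothetical and stands in for the real `full_data` built earlier in analysis.Rmd:

```r
library(ggplot2)

# Hypothetical stand-in for `full_data`; the real analysis derives
# `percentage_parallel` from benchmark measurements.
toy <- expand.grid(
  samples = c(1e3, 1e4, 1e5),
  program = c("openmp", "pthreads", "cuda"),
  threads = c(2, 4, 8)
)
set.seed(1)
toy$percentage_parallel <- runif(nrow(toy), 0.5, 1)

# interaction() collapses the two variables into one factor whose
# levels are program/thread pairs (e.g. "openmp.4"), each drawn in
# its own color under a single legend.
ggplot(toy, aes(x = samples, y = percentage_parallel,
                color = interaction(program, threads))) +
  geom_point() +
  labs(title = "Program/Fractal Parallelism (toy data)",
       x = "Samples",
       y = "Percentage Parallel",
       color = "Implementation/Threads")
```

The relabeled legend title (`color = "Implementation/Threads"`) in the diff reflects this merge: one legend now carries the information previously split across the color and shape legends.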