Author: Cameron Ashby

Date: May 3, 2026

Series: Capstone Development Journal

This is the post I have wanted to write for a while but could not until I was on the other side of the defense. The Master of Science in Computer Science at Full Sail University was twelve months of compressed graduate work: eight content courses and a four-month capstone that produced FastFold Suite, a unified protein structure prediction platform that benchmarks ESMFold, OmegaFold, and AlphaFold 2 on CASP14 targets. I defended on April 24, 2026, and received a conditional pass that converted to a full pass once the supplemental validation package was on file. With the Graduation Launch course wrapping up and the Software Project: Deployment and Professional Presentation course producing this very post, it is the right moment to look back.

This post does two things. First, it walks through every course in the program and notes what each contributed (or did not) to the final capstone product. Second, it offers honest feedback on the program itself: where it shone, where it struggled, what I wish had been different, and what I would tell someone considering enrolling.

The Courses

The MSCS at Full Sail runs as a sequence of one-month accelerated courses. Each course produces a graded deliverable, usually an IEEE-formatted paper or a structured project, along with discussion posts and instructor feedback. I took them in this order:

COS540-O – Research Approaches in Computer Science  (Term C202505)

This was the first course in the program and, in retrospect, the most important one, even though I did not fully appreciate it at the time. The course teaches you how to read a research paper, write one, and use the IEEE format. Every subsequent paper I wrote in this program leaned on these foundations. The capstone thesis is an IEEE paper. The validation paper that resolved the conditional pass is an IEEE paper. The two journal submissions I am preparing are IEEE papers. Without this course, I would have spent the capstone learning the IEEE format from scratch instead of using it as a baseline.

What I would tell incoming students: do not treat this course as a warm-up. Build the literature search habits here. The reference manager workflow you set up in week two of this course is the same one you will use to manage 76+ sources for the doctoral lit review later.

COS550-O – Advanced Software Engineering  (Term C202506)

This course taught the architectural patterns I ended up using to build FastFold Suite. The platform consists of a Flask backend, a React frontend, three isolated conda environments for the three models, a SQLite database with four tables, and a verification harness that runs independently of the application code. That is a real software architecture, and it works because of the patterns covered in this course: separation of concerns, dependency isolation, contract-first API design, and the discipline of writing tests before you trust your own code.

The capstone-relevant lesson: when you have three deep-learning frameworks that cannot share a Python environment (PyTorch, JAX/Haiku, and OmegaFold’s pinned NumPy 1.x), the answer is not to fight the dependency graph. The answer is environment isolation with subprocess invocation. That solution came directly from the architectural thinking taught here.
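
To make that concrete, here is a minimal sketch of the pattern. The conda env name, the paths, and the runner script are illustrative assumptions, not FastFold Suite's actual layout:

```python
# A minimal sketch of environment isolation with subprocess invocation.
# The env name, paths, and runner script (run_omegafold.py) are
# illustrative assumptions, not the platform's actual code.
import json
import subprocess
from pathlib import Path

# Invoke the model through that env's own interpreter, never the host's.
ENV_PYTHON = Path.home() / "miniconda3" / "envs" / "omegafold_env" / "bin" / "python"

def predict(sequence: str, out_dir: Path) -> dict:
    """Run one prediction in the isolated env and parse its JSON output."""
    result = subprocess.run(
        [str(ENV_PYTHON), "run_omegafold.py",
         "--sequence", sequence, "--out", str(out_dir)],
        capture_output=True, text=True, timeout=3600,
    )
    # Fail loudly: a run that produces nothing should never look like success.
    if result.returncode != 0:
        raise RuntimeError(f"OmegaFold run failed: {result.stderr.strip()}")
    return json.loads((out_dir / "result.json").read_text())
```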

COS560-O – Data Science  (Term C202507)

Data Science gave me the statistical literacy I needed to make the FastFold benchmark credible. Paired t-tests on matched protein targets, Wilcoxon signed-rank as a non-parametric confirmation, one-way ANOVA for the fold-class effect, p-value interpretation, the difference between statistical and practical significance, and when to use parametric versus non-parametric tests. All of it came from this course. The fold-class ANOVA finding (F(2,41) = 3.42, p < 0.05 for ESMFold but not OmegaFold or AlphaFold 2) is the most novel scientific contribution of my thesis, and I would not have known how to construct that test without this course.
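
For readers who want the mechanics, here is a minimal sketch of that battery with SciPy. The scores and fold-class groupings are placeholders, not benchmark data:

```python
# A minimal sketch of the statistical battery with SciPy. The arrays are
# placeholders; in the real benchmark each entry is a per-target accuracy
# score on matched CASP14 targets.
import numpy as np
from scipy import stats

esmfold   = np.array([0.71, 0.83, 0.65, 0.90, 0.77, 0.62])
omegafold = np.array([0.68, 0.80, 0.70, 0.88, 0.74, 0.60])

# Paired t-test on matched targets, with Wilcoxon signed-rank as the
# non-parametric confirmation of the same comparison.
t_stat, t_p = stats.ttest_rel(esmfold, omegafold)
w_stat, w_p = stats.wilcoxon(esmfold, omegafold)

# One-way ANOVA for a fold-class effect: does one model's accuracy
# differ across fold classes? (Groupings here are placeholders.)
f_stat, f_p = stats.f_oneway(esmfold[:2], esmfold[2:4], esmfold[4:])

print(f"paired t: p={t_p:.3f}  Wilcoxon: p={w_p:.3f}  ANOVA: p={f_p:.3f}")
```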

Lesson for incoming students: take the assignments seriously, even when they feel disconnected from your eventual capstone. The Jupyter notebooks I built in this course served as the template for the statistical analysis modules in the FastFold Suite.

COS630-O – Data Visualization and Extended Reality  (Term C202508)

Honest assessment: this is the course with the loosest connection to my final product. The XR portion of the course was interesting, but did not show up in the capstone. The data visualization portion did, indirectly. The benchmark dashboard in FastFold Suite uses sortable tables, summary statistics, and per-target drill-downs that present comparative data clearly. The principles for that came from this course.

I would have found more value if the course had spent more time on scientific visualization (chart selection for distributions, error bars, visual encoding of statistical significance) and less time on XR for its own sake. That is a curriculum suggestion, not a complaint.

COS570-O – Advanced Artificial Intelligence  (Term C202509)

This was the course that took me from “I have read about AlphaFold” to “I understand what a transformer attention head is doing during inference.” The capstone uses three deep-learning models with very different architectures: ESMFold’s 3-billion-parameter language model; OmegaFold’s 670M-parameter language model plus Geoformer plus 10 recycling iterations; and AlphaFold 2’s 48-block Evoformer with Invariant Point Attention. Reading the original papers and explaining the architectural differences in my thesis discussion section required the foundation this course built.

The discussion-section claim that ESMFold’s fold-class sensitivity is a consequence of its architecture (no recycling iterations, no dedicated geometric module, all the structural reasoning happening inside the language model) is the kind of claim you can only make if you understand what those architectural pieces do. This course is what made that claim possible.

COS590-O – Human-Computer Interaction  (Term C202510)

HCI was where I started thinking about the user of FastFold Suite as a real person rather than an abstraction. The intended audience is structural biologists, drug discovery teams, and computational biologists working with orphan proteins. Most of them are not software engineers. The platform’s web interface, sequence input validation, model-selection checkboxes, and 3D structure viewer (NGL Viewer) all came out of treating the interface as something a domain expert who is not a coder needs to use without friction.

The course material on heuristic evaluation and cognitive walkthroughs directly informed the user testing protocol I wrote for the capstone deliverables. Recognition over recall, error prevention, consistency. These are real principles that real users notice when they are missing.

COS640-O – HCI Application Development  (Term C202511)

This course is where the React skills came from. The FastFold Suite frontend is roughly 2,380 lines of React. The component structure, the state management patterns, and the way the dashboard handles asynchronous benchmark progress were all scaffolded from what I built in this course. I came into the program comfortable with backend work and broadcast operations. I left this course able to ship a working frontend.

Practical lesson: the assignments here are deliverables you will reuse. The component patterns I built for a class project ended up in the capstone’s production code. Build them as if they will be reused, because they will be.

COS580-O – Machine Learning  (Term C202512)

This course covered the classical ML foundation: regression, classification, clustering, supervised and unsupervised learning, train/test/validation splits, cross-validation, and evaluation metrics. The capstone does not train models from scratch (it benchmarks pre-trained models), but the evaluation framework I built rests directly on the principles taught here. How do you compare two models? What is a fair test? When is a difference real, and when is it noise? Why does coverage matter for statistical power? Those questions are this course’s questions, and the capstone’s answers are this course’s answers.
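
As a toy illustration of the fair-test idea (stand-in dataset and models, nothing from the capstone), comparing two models on identical cross-validation folds looks like this:

```python
# A toy illustration of a fair comparison: both models are scored on the
# same cross-validation folds of the same data. The dataset and models
# are stand-ins, not capstone code.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
folds = KFold(n_splits=5, shuffle=True, random_state=42)  # identical folds

for model in (LogisticRegression(max_iter=5000), RandomForestClassifier()):
    scores = cross_val_score(model, X, y, cv=folds)
    # The spread across folds is what separates a real difference from noise.
    print(f"{type(model).__name__}: {scores.mean():.3f} +/- {scores.std():.3f}")
```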

The pLDDT confidence reversal finding (significant in partial benchmark runs at p = 0.004, not significant in the full 50/50/50 run at p = 0.171) is a textbook lesson in why you do not report results from incomplete data. That lesson came from this course, and it became one of the three key findings in my thesis.

COS650-O – Software Project: Research, Planning, and Design  (Term C202601)

This is the first capstone course, where the scope of the work becomes real. The deliverables in this month included the High-Level Design Document, the IRB documentation, the literature review for the project, the initial architecture, and the project plan. The course forces you to think about the entire system before you write code, and that discipline saved me months of rework on the capstone proper.

Lesson: the HLDD is a contract with your future self. When I hit week ten of development and could not remember why I had chosen ColabFold over local OpenFold, the HLDD told me. Write it as if you will be the one reading it later, because you will be.

COS660-O – Software Project: Development I  (Term C202602)

Development I is where the platform actually started getting built. ESMFold integration, the Flask scaffold, the database schema, and the first end-to-end prediction pipeline. The course pace is one milestone per week with weekly advisor check-ins. That cadence works if you treat it as a forcing function. It does not work if you let weeks slip by, because next week’s milestone assumes last week’s is done.
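
For flavor, here is a minimal sketch of the kind of endpoint that month produced; the route, schema, and validation rule are illustrative, not the platform's actual code:

```python
# A minimal sketch of a Flask prediction endpoint: validate the sequence,
# record a queued job in SQLite, return the job id. The route name,
# table layout, and database file are illustrative assumptions.
import re
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
VALID_SEQUENCE = re.compile(r"^[ACDEFGHIKLMNPQRSTVWY]+$")  # 20 standard residues

def get_db() -> sqlite3.Connection:
    db = sqlite3.connect("fastfold.db")
    db.execute("CREATE TABLE IF NOT EXISTS jobs "
               "(id INTEGER PRIMARY KEY, sequence TEXT, status TEXT)")
    return db

@app.post("/api/predict")
def predict():
    sequence = request.get_json(force=True).get("sequence", "").upper()
    if not VALID_SEQUENCE.match(sequence):
        return jsonify({"error": "invalid amino acid sequence"}), 400
    with get_db() as db:
        cur = db.execute(
            "INSERT INTO jobs (sequence, status) VALUES (?, 'queued')",
            (sequence,))
    return jsonify({"job_id": cur.lastrowid, "status": "queued"}), 202
```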

My biggest lesson from this course: when something is broken, debug the actual subprocess invocation, not the assumption about which Python is being called. The OmegaFold silent-failure bug (returning 0 of 50 results with no error output) cost me real time. Once I traced the subprocess to the wrong conda environment, the fix was thirty seconds. Lesson learned for the rest of the program.
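
The diagnostic I should have run first looks something like this; the env path is illustrative:

```python
# The thirty-second diagnostic, after the fact: ask the subprocess which
# interpreter and NumPy it actually sees. The env path is illustrative.
import subprocess

ENV_PYTHON = "/home/user/miniconda3/envs/omegafold_env/bin/python"

probe = subprocess.run(
    [ENV_PYTHON, "-c",
     "import sys, numpy; print(sys.executable); print(numpy.__version__)"],
    capture_output=True, text=True,
)
print(probe.stdout)  # confirms the env path and the pinned NumPy 1.x
print(probe.stderr)  # an ImportError here explains a "silent" failure
```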

COS670-O – Software Project: Development II  (Term C202603)

Development II is when the platform stopped being three loosely connected predictors and became a real benchmarking system: statistical analysis, fold-class ANOVA, the verification harness, and the three benchmark runs that progressively expanded coverage from partial to 50/50/50. This is also the course in which the methodology mattered most: running a benchmark, looking at the result, realizing the result was an artifact of incomplete coverage, running a fuller benchmark, and finding that the truth had changed. That iterative discipline is what separates a real benchmark from a press release.

Practical advice: write the verification harness early. The independent verification harness that produced the 268/270 pass rate in the conditional pass package was the single most important asset for resolving the committee’s concerns. If I had built it on day one of Development II instead of week three, I would have caught the T1099 timing bug a month earlier.
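
If it helps to picture one, here is a stripped-down sketch of the shape of such a harness; the directory layout and JSON field names are hypothetical:

```python
# A stripped-down sketch of an independent verification harness. It
# recomputes a reported statistic from raw per-target JSON files and
# counts pass/fail checks, entirely outside the application code.
# Directory layout and field names ("target", "tm_score") are hypothetical.
import json
from pathlib import Path
from scipy import stats

def load_scores(results_dir: Path, model: str) -> dict[str, float]:
    """Map target id -> score from one model's raw result files."""
    scores = {}
    for path in (results_dir / model).glob("*.json"):
        record = json.loads(path.read_text())
        scores[record["target"]] = record["tm_score"]
    return scores

def verify(results_dir: Path, reported_p: float) -> list[bool]:
    esm = load_scores(results_dir, "esmfold")
    omega = load_scores(results_dir, "omegafold")
    shared = sorted(esm.keys() & omega.keys())
    checks = []
    # Check 1: the paired comparison covers only matched targets.
    checks.append(len(shared) == len(esm) == len(omega))
    # Check 2: the reported p-value reproduces from the raw data.
    _, p = stats.ttest_rel([esm[t] for t in shared],
                           [omega[t] for t in shared])
    checks.append(abs(p - reported_p) < 1e-3)
    return checks

# Usage: print(verify(Path("results"), reported_p=0.171))
```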

COS680-O – Software Project: Deployment and Professional Presentation  (Term C202604, current)

This is the course producing this post, and it is the final stretch. The deliverables are the deployment artifacts, the portfolio updates, the defense itself, and the program retrospective. It is the course where everything you have built gets packaged for an audience: the committee, your portfolio, future employers, future doctoral programs. The deliverables here are not new technical work as much as they are the framing of work already done. That framing matters more than I expected. A capstone that is well-built but poorly presented does not get the credit it deserves.

Specific lesson from this month: when your committee gives you a conditional pass, do not panic and do not over-defend. Listen to the conditions, build a supplemental package that addresses each one directly, and ship it. My package included a validation paper, regenerated benchmark JSONs, an independent verification script with 270 checks, and a clean response letter to each committee question. The pass was converted to full within a week of submission.

GRAD4000-O – Graduation Launch  (Term C202604, current)

This is the administrative course that runs alongside the final capstone month. It covers graduation logistics, alumni resources, and the transition out of the program. There is no technical content, but the course is genuinely useful for what it is: making sure the paperwork side of the degree does not become a problem at the finish line.

Program Feedback: What Worked and What Did Not

What worked

The cohort structure and one-month course pace force a working professional to ship deliverables on a schedule. I built FastFold Suite while working full-time as a Technology Specialist at Manteno CUSD 5 (managing IT and AV infrastructure for 2,000+ users across six buildings) and part-time with AVI-SPL VideoLink in event production. The accelerated format made that possible. A traditional semester schedule would not have.

The advisor relationship was the single biggest factor in the capstone’s quality. Dr. Andreas Marpaung met with me weekly throughout both capstone phases, gave specific, actionable feedback on every deliverable, and ran a full rehearsal of my presentation with me before the defense. The committee members raised real and important questions, and the conditional pass with a clear path to resolution was the right outcome for the right reasons. I would not change anything about how the defense process worked.

The IEEE paper format requirement across courses is one of the program’s quiet superpowers. By the time you reach the capstone, you have written eight or ten IEEE papers. The capstone thesis is just one more, but at a higher standard. Programs that do not enforce this format have students learning the IEEE format for the first time during the capstone, which is the wrong moment to learn it.

What I would change

Course quality across the eight content courses was uneven. The strongest courses (Research Approaches, Advanced Software Engineering, Data Science, and Advanced AI) had instructors who engaged substantively with discussion posts, returned feedback within a few days, and treated graduate work as such. In other courses, the feedback felt more like a set of rubric checkboxes than genuine engagement. When the program runs one month per course, a slow turnaround means feedback that arrives after the course is over and has no chance to improve the next deliverable.

Specific suggestion: a feedback SLA. Twenty-four to forty-eight hour turnaround on graded deliverables, especially during the back half of each course. The current pace makes late feedback functionally useless.

Curriculum suggestion: more emphasis on independent verification and reproducibility in the data-heavy courses. The conditional pass on my defense came down to verification: the committee wanted independent confirmation that my numbers reproduced from raw data using peer-reviewed external tools (SciPy, NumPy, the published model papers). That kind of work is the gold standard in computational research, and it should be taught earlier than the capstone. A two-week module on reproducibility and verification harnesses, slotted into Data Science or Machine Learning, would make the capstone smoother for everyone.

Was the feedback timely and meaningful?

Honest answer: it varied. The capstone advisor’s feedback was excellent throughout. The content-course feedback ranged from genuinely useful to perfunctory. When I got specific, line-level feedback from an instructor on a paper draft, it materially improved my work. When I got a numeric grade with no comment, it did not.

Did the quality of material and instruction meet expectations?

Mostly yes. The technical content held up to graduate-level scrutiny in the strong courses. The quality of instruction was uneven, and that is the area where the program has the most to gain by tightening standards.

For Students Considering the MSCS or Starting Their Capstone

If you are considering enrolling, the program rewards consistent, steady work and punishes procrastination ruthlessly. One month per course leaves no slack. If you are someone who can produce a deliverable per week without external pressure, this format is a force multiplier. If you need the structure of long deadlines and synchronous classes, look elsewhere.

If you are about to start the capstone, pick a project that you would still want to work on after twelve weeks of grinding on it. The capstone is long, intense, and requires you to keep showing up after the novelty wears off. I picked protein structure prediction because I had already spent a year reading AlphaFold papers for fun. That intrinsic interest is what got me through the OmegaFold silent failures, the JAX dependency conflicts, the CrAss phage sequence assignment bug, and the T1099 timeout the night before the JSON regeneration deadline.

Build your verification harness early. Write your HLDD as a contract with your future self. Treat your weekly advisor check-ins as the most important meeting on your calendar that week. Send your committee the slides and the thesis at least 48 hours before the defense. Read your committee’s feedback as a roadmap, not a verdict. And when you defend, breathe. You have done the work. The defense is just the explanation.

Closing

The MSCS is not the end of the road for me. I start Purdue’s Doctor of Technology (DTech) program in Fall 2026 with a 30-credit blanket transfer from this master’s, working toward a dissertation on technology self-efficacy in non-technical staff using Bandura’s framework. FastFold Suite is heading toward a journal submission and may continue as a research track in the doctoral work.

None of that would be on the table without the foundation this program built. The thesis is a real piece of work. The platform is a real piece of work. The verification methodology is a real piece of work. They exist because the program demanded that they exist, and because the people who taught me, advised me, and challenged me in the defense made sure they would withstand scrutiny. That is exactly what graduate education is supposed to do.

Appendix A: AI Usage Documentation

This document was developed with the assistance of AI-powered tools for writing quality assurance. Grammarly, an AI-driven writing assistant, was used throughout the drafting process to identify and correct grammatical errors, improve sentence clarity, and ensure a consistent academic tone. Grammarly’s suggestions were reviewed and accepted or rejected on a case-by-case basis. No content was generated solely by the tool. All research, analysis, system design, implementation, and intellectual contributions in this document are the original work of the author.
