9:30am CDT N214B Capstone OL-Team2
Title: So you want to be a restaurant owner?
Project Description: Our project aims to provide valuable insights into the restaurant industry by analyzing key business attributes across four major cities: Austin, Texas; Chicago, Illinois; New York City, New York; and Los Angeles, California. We will analyze key factors including health inspection scores, location, and customer reviews, together with a time series analysis spanning the past five years and supplementary census data. This data will be collected through web scraping, official city resources, and the Census Bureau. Our goal is to identify patterns behind successful restaurant openings and expansions and to offer actionable insights that help restaurant owners make informed business decisions. Our analysis will culminate in a business-focused dashboard that presents our findings and helps restaurant owners optimize their strategies.
Master's Students: Reid Lawson, Kevin Sherer, Ryan Russell, Daniel Bassett

10:00am CDT N214B Capstone OC1-Team1
Title: Detecting Bias in Missouri News Data Using NLP and Machine Learning
Project Description: This project focuses on detecting implicit bias in Missouri news articles using advanced Natural Language Processing (NLP) and machine learning techniques. The dataset consists of ~100,000 news articles provided by various sources, supplemented with bias-labeled datasets from AllSides and NewsMediaBias-Plus. Our approach involves comprehensive text preprocessing (tokenization, lemmatization, and POS tagging), feature engineering (TF-IDF, n-grams, and sentiment scores), and embedding-based representations using transformer models (BERT, RoBERTa, and Sentence-BERT). Bias detection is formulated as a combined classification and clustering problem, leveraging DBSCAN, HDBSCAN, and K-Means for clustering alongside dimensionality reduction techniques (UMAP, t-SNE, and PCA). We fine-tune large language models (Claude, LLaMA 3.2, Mistral-7B) for classification, with model performance evaluated using precision, recall, and F1-score. Explainability methods ensure the interpretability of bias indicators. This work aims to provide industry-relevant insights into algorithmic bias detection and media transparency at scale.
Master's Students: Gulli Atakishiyeva, Piyusha Modhave, Tarun Kumar
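
For the first project's five-year time series analysis of health inspection scores, a minimal pandas sketch is shown below. The schema (city, inspection_date, score), the toy rows, and the quarterly aggregation are illustrative assumptions, not the team's actual data or method (recent pandas assumed for the "QE"/"3ME" frequency aliases).

```python
# Minimal sketch of a five-year inspection-score trend analysis like the one
# described in the first abstract. The columns (city, inspection_date, score)
# and the toy rows are illustrative assumptions, not the team's actual data.
import pandas as pd

df = pd.DataFrame({
    "city": ["Austin", "Chicago", "New York City", "Los Angeles"] * 6,
    "inspection_date": pd.date_range("2020-01-31", periods=24, freq="3ME"),
    "score": [92, 88, 85, 90, 94, 83, 87, 91, 89, 86, 84, 93,
              95, 82, 88, 90, 91, 87, 85, 92, 90, 89, 86, 94],
})

# Keep only the most recent five years of inspections
cutoff = df["inspection_date"].max() - pd.DateOffset(years=5)
recent = df[df["inspection_date"] >= cutoff]

# Quarterly mean health-inspection score per city: a dashboard-ready series
trend = (
    recent.groupby(["city", pd.Grouper(key="inspection_date", freq="QE")])["score"]
    .mean()
    .unstack("city")
)
print(trend)
```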
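
For the second project's TF-IDF and n-gram feature engineering with precision/recall/F1 evaluation, here is a minimal scikit-learn sketch. The toy corpus, the three AllSides-style labels, and the logistic-regression classifier are illustrative assumptions; the abstract does not commit to a specific classical model.

```python
# Illustrative TF-IDF + n-gram classification baseline with the
# precision/recall/F1 evaluation named in the abstract. The toy corpus and
# AllSides-style labels are placeholders for the ~100,000-article dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = [
    "Radical spending spree threatens family values and small business.",
    "Officials released the budget timeline at a press briefing.",
    "Corporate greed keeps squeezing working families out of housing.",
] * 4  # toy stand-ins only
labels = ["right", "center", "left"] * 4

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=42
)

# Unigrams and bigrams mirror the abstract's n-gram feature engineering
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

# Per-class precision, recall, and F1-score, as in the evaluation plan
print(classification_report(y_test, pipeline.predict(X_test)))
```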
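
And for the same project's embedding-plus-clustering path (Sentence-BERT embeddings, UMAP reduction, HDBSCAN clustering), a sketch assuming the sentence-transformers, umap-learn, and hdbscan packages; the model name and hyperparameters are illustrative choices scaled down for a toy corpus.

```python
# Sketch of the embedding-based clustering path: Sentence-BERT embeddings,
# UMAP dimensionality reduction, then density-based HDBSCAN clustering.
# Model name and parameters are illustrative, not the team's settings.
import hdbscan
import umap
from sentence_transformers import SentenceTransformer

texts = [
    "City council approves new transit funding measure.",
    "Lawmakers clash over the proposed education budget.",
    "Local restaurant week draws record crowds downtown.",
    "State auditor questions spending on highway contracts.",
] * 5  # toy stand-in for the news corpus

# 384-dimensional sentence embeddings from a small Sentence-BERT model
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(texts)

# Reduce dimensionality before density-based clustering, per the abstract
reduced = umap.UMAP(n_components=5, n_neighbors=10, metric="cosine",
                    random_state=42).fit_transform(embeddings)

# HDBSCAN labels dense groups; -1 marks articles treated as noise
clusters = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(reduced)
print(clusters)
```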