House_price

House Price Prediction Web App

About the Project

This project is a complete end-to-end House Price Prediction Web Application built using Machine Learning and a real-world dataset inspired by the Gurgaon housing market. It began as a classic regression problem but quickly evolved into a multi-functional product that combines price prediction, apartment recommendations, interactive analytics, and a web interface.

The goal was to go beyond the typical ML regression project and create a solution that mirrors a real estate tech product, helpful for both buyers and analysts.


Real-World Focus: Gurgaon Housing Market

To give this project a realistic business edge, the focus was narrowed down to Gurgaon (Gurugram), one of India's rapidly developing cities with a highly organized real estate sector. The city's sector-based structure made it ideal for analyzing price trends and regional comparisons.

To gather meaningful insights and define the data schema, I referred to property listings from 99acres.com, which grounded the feature set and price ranges in real listings.

This grounding in real-world data makes the app much more than a data science demo: it's a prototype for a practical solution.


Project Features

  1. Price Predictor
    A trained regression model that estimates property prices based on location, size, furnishing, and other features.

  2. Recommendation System
    Recommends similar houses based on user preferences (area, price range, BHK, etc.).

  3. Analytics & Visual Insights
    Interactive plots and heatmaps to show pricing trends across sectors, popular property types, and more.

  4. Web Application
    Developed using Flask, HTML/CSS/JS, and integrated with the ML models, offering a sleek, responsive user interface.


Development Workflow

The project followed a structured pipeline to ensure accuracy, explainability, and usability. Below is a breakdown of each key stage:


Stage 1: Data Gathering

Collected structured property data focused on Gurgaon. The dataset includes features such as property name, society, price, area, bedrooms, bathrooms, balconies, floor number, facing, age/possession, furnishing details, and nearby locations.

The schema and feature set were designed by referencing real listings from 99acres.com.


Stage 2: Data Cleaning

Performed thorough preprocessing to handle inconsistencies in the raw listings (detailed in the Stage 2 section below).


Stage 3: Feature Engineering


Stage 4: Exploratory Data Analysis (EDA)

Generated univariate and multivariate visualizations of prices, areas, and sector trends (detailed in the EDA section below).


Stage 5: Outlier Removal


Stage 6: Missing Value Imputation


Stage 7: Feature Selection


Stage 8: Model Selection & Hyperparameter Tuning

Tried multiple regression models, from Linear Regression to tree-based ensembles such as Random Forest, Extra Trees, and XGBoost.

Best model selected based on R², RMSE, and cross-validation.
Tuned using GridSearchCV and RandomizedSearchCV.


Stage 9: Analytics & Insight Module

Built a powerful analytics dashboard for exploring sector-wise prices, property types, and feature trends (detailed in the analytics module below).

Interactive plots were made using Plotly, Seaborn, and Matplotlib.


Stage 10: Recommendation System

A lightweight content-based system that suggests similar apartments based on location advantages, price details, and top facilities.

Built using cosine similarity and custom filtering logic.


Stage 11: Web App Development


Stage 12: Deployment

Deployed the app on Render (live link in the sections below).


Stage 1: Data Collection – The Foundation of the Project

The first step of the project was to gather real-world data that reflects the actual property market. Since the objective was to predict house prices and recommend similar listings, I chose Gurugram (Gurgaon), a city where properties are well-structured across sectors, making it suitable for spatial and categorical analysis.

Data Source

I collected data from 99acres.com, a popular Indian real estate listing platform.

Tools Used

What I Scraped

I scraped flats, houses, and apartment listings, ensuring the dataset covered all types of residential properties. For each listing, I collected details such as:

property_data = {
    'property_name': ...,  
    'link': ...,  
    'society': ...,  
    'price': ...,  
    'area': ...,  
    'areaWithType': ...,  
    'bedRoom': ...,  
    'bathroom': ...,  
    'balcony': ...,  
    'additionalRoom': ...,  
    'address': ...,  
    'floorNum': ...,  
    'facing': ...,  
    'agePossession': ...,  
    'nearbyLocations': ...,  
    'description': ...,  
    'furnishDetails': ...,  
    'features': ...,  
    'rating': ...,  
    'property_id': ...
}

Real Challenges Faced


Final Dataset Preparation


Stage 2: Data Cleaning – From Raw to Reliable

Before building any machine learning model, the foundation lies in cleaning and understanding the data. Raw real estate data is often inconsistent, messy, and incomplete, so I took this stage seriously and applied both domain knowledge and logic to build a high-quality dataset.


What Cleaning Involved:

Initial Manual Checks in Excel

Scripted Cleaning in Jupyter
After initial Excel-based cleanup, both datasets were loaded into Python for a deeper, programmable cleaning process.
Here's what was handled:


Smart & Logical Cleaning Decisions

Here are some specific actions I took using both research and common sense:


Final Columns After Cleaning

After thorough filtering, renaming, parsing, and formatting, the final cleaned dataset was saved as:
gurgaon_properties.csv

It contains the following clean, usable columns:

['property_name', 'property_type', 'society', 'price', 'price_per_sqft',
 'area', 'areaWithType', 'bedRoom', 'bathroom', 'balcony',
 'additionalRoom', 'address', 'floorNum', 'facing', 'agePossession',
 'nearbyLocations', 'description', 'furnishDetails', 'features',
 'rating', 'noOfFloor']

Final Dataset Summary



Stage 3: Feature Engineering

After data cleaning, the dataset was enhanced through careful feature engineering, a crucial step to convert raw, unstructured values into meaningful and predictive features. This process combined domain knowledge with analytical techniques to enrich the data and boost model performance.

Key Highlights


Luxury Score & Clustering

To capture the lifestyle and luxury quotient of each property, a scoring and clustering system was developed (a minimal sketch follows below).

These engineered features allowed the model to distinguish properties not just by size or price, but by lifestyle value, making the dataset ready for sophisticated modeling and pricing recommendations.
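A minimal sketch of how such a luxury score and clustering could be computed, assuming the features column holds a stringified list of amenities and using a hypothetical LUXURY_WEIGHTS mapping (both are illustrative, not the project's exact weighting):

import ast
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical amenity weights; the actual project derives its own weighting scheme.
LUXURY_WEIGHTS = {"Swimming Pool": 8, "Clubhouse": 6, "Gymnasium": 5, "Lift": 3, "Park": 2}

def luxury_score(raw_features):
    """Parse the stringified feature list and sum the weights of luxury amenities."""
    try:
        feature_list = ast.literal_eval(raw_features)
    except (ValueError, SyntaxError):
        return 0
    return sum(LUXURY_WEIGHTS.get(f, 0) for f in feature_list)

df = pd.read_csv("gurgaon_properties.csv")
df["luxury_score"] = df["features"].apply(luxury_score)

# Group listings into broad luxury tiers (e.g., budget / standard / premium) with K-Means.
km = KMeans(n_clusters=3, n_init=10, random_state=42)
df["luxury_cluster"] = km.fit_predict(df[["luxury_score"]])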


The final dataset from this stage was saved as:
gurgaon_properties_featured.csv


Stage 4: Exploratory Data Analysis (EDA) – Gurgaon Properties

After comprehensive feature engineering, a detailed Exploratory Data Analysis (EDA) was conducted to understand the underlying patterns, distributions, and relationships in the Gurgaon real estate dataset.

This EDA is categorized into three key notebooks; the highlights are summarized below.


Univariate Analysis Highlights

Property and Society Distribution

Sector-wise Listings

Price Distribution

Built-up and Super Area

Floor Distribution


Multivariate Analysis Highlights

Property Type vs Price & Area

Bedrooms and Property Type Correlation

Area vs Price

Possession Age vs Price

Sector-Level Price Heatmap


Data Transformation


Key Takeaways


Stage 5: Outlier Detection & Removal

Outliers can distort data insights and negatively impact the performance of machine learning models. This phase focuses on detecting and handling outliers in key real estate features such as price, price per sqft, and area-to-room ratio to ensure data consistency, accuracy, and interpretability.


Objectives


Why Outlier Detection Matters

Outliers can arise from:

Effects of Outliers:


Steps Followed

1. Outlier Detection in the price Column (most of the treatment happened here)

Manual Review & Action:

Result: Cleaner and more realistic price distribution.


2. Outlier Detection in price_per_sqft

Steps:

Preserved:

Result: Distribution normalized to reflect realistic per-sqft prices.


3. Area-to-Room Ratio Validation

Used architectural logic to check whether the reported area is plausible for the number of rooms.

Steps:

Result: Architecturally consistent dataset.


4. Final Refinement

Kept:

Removed:


Outcome Summary

Category           | Action Taken
price              | IQR method + manual correction
price_per_sqft     | IQR method + logical filtering
Area-to-Room Ratio | Domain rule-based validation
Total Rows Dropped | ~400+ (after thorough analysis)
Final Dataset      | Clean, consistent, and ML-ready
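A minimal sketch of the IQR-based filtering summarized above, assuming a pandas DataFrame with a numeric price_per_sqft column (the 1.5 multiplier and the review-before-drop step are the classic approach, shown here for illustration):

import pandas as pd

df = pd.read_csv("gurgaon_properties.csv")

def iqr_bounds(series, k=1.5):
    """Return the lower/upper fences of the classic IQR rule."""
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

low, high = iqr_bounds(df["price_per_sqft"])
# Flag rows outside the fences for manual review instead of dropping blindly.
outliers = df[(df["price_per_sqft"] < low) | (df["price_per_sqft"] > high)]
df_clean = df.drop(outliers.index)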

Files & Artifacts



Stage 6: Missing Value Imputation

In this stage, missing values weren't just filled in, they were understood. I leveraged real-world relationships and logical imputation strategies to impute values with confidence and maintain the integrity of the Gurgaon real estate dataset.


Overview of Missing Data

Feature             | Missing Values
balcony             | 0
floorNum            | 17
facing              | 1,011
super_built_up_area | 1,680
built_up_area       | 1,968
carpet_area         | 1,715
agePossession       | 0 (but 291 "Undefined")

Imputing built_up_area – Based on 530 Valid Samples

Step 1: Ratio Derivation

From 530 rows where carpet_area, built_up_area, and super_built_up_area were all present, I derived realistic ratios:

Ratio                               | Value   | Meaning
carpet_area / built_up_area         | ≈ 0.90  | Carpet is ~90% of built-up area
super_built_up_area / built_up_area | ≈ 1.105 | Super built-up is ~110.5% of built-up

Step 2: Smart Estimation Logic

Available Columns | Estimation Formula
Carpet + Super    | Average of (carpet_area / 0.9) and (super_built_up_area / 1.105)
Only Carpet       | built_up_area = carpet_area / 0.9
Only Super        | built_up_area = super_built_up_area / 1.105

Result: All 1,968 missing values were filled logically and confidently.
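A minimal sketch of that estimation logic in pandas, assuming the three area columns exist as floats (ratios taken from the table above):

import numpy as np
import pandas as pd

df = pd.read_csv("gurgaon_properties.csv")

CARPET_RATIO = 0.90   # carpet_area ≈ 0.90 * built_up_area
SUPER_RATIO = 1.105   # super_built_up_area ≈ 1.105 * built_up_area

def estimate_built_up(row):
    """Estimate built_up_area from whichever related areas are present."""
    if pd.notna(row["built_up_area"]):
        return row["built_up_area"]
    from_carpet = row["carpet_area"] / CARPET_RATIO if pd.notna(row["carpet_area"]) else np.nan
    from_super = row["super_built_up_area"] / SUPER_RATIO if pd.notna(row["super_built_up_area"]) else np.nan
    candidates = [v for v in (from_carpet, from_super) if pd.notna(v)]
    return np.mean(candidates) if candidates else np.nan

df["built_up_area"] = df.apply(estimate_built_up, axis=1)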


Imputing floorNum – Grouped by Context

The 17 missing values in floorNum were mainly from properties labeled as "House".

Strategy: imputed with the median floorNum of comparable listings, grouped by location and property type (see the short sketch below).

Result: Imputed floor levels were contextually valid and realistic.
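Continuing with the same DataFrame, a sketch of that grouped median imputation (the exact grouping keys are an assumption):

# Fill missing floorNum with the median floor of comparable listings
# (grouped here by sector and property_type; the real keys may differ),
# falling back to the overall median when a group has no data.
group_median = df.groupby(["sector", "property_type"])["floorNum"].transform("median")
df["floorNum"] = df["floorNum"].fillna(group_median).fillna(df["floorNum"].median())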


Handling agePossession – Replacing "Undefined"

Although not NaN, 291 rows had "Undefined" in agePossession.

Strategy:

  1. Grouped by sector + property_type → imputed with mode.
  2. If unavailable, used mode within sector only.
  3. Assigned age categories like:
    • "Newly Constructed"
    • "Relatively New"
    • "Moderately Old"

Result: All 291 "Undefined" entries were replaced with meaningful labels.
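A minimal sketch of that mode-based replacement, continuing with the same DataFrame and the numpy import from the earlier sketches:

def mode_or_nan(series):
    """Return the most frequent non-null value in a group, or NaN if the group is empty."""
    m = series.mode()
    return m.iloc[0] if not m.empty else np.nan

df["agePossession"] = df["agePossession"].replace("Undefined", np.nan)
by_sector_type = df.groupby(["sector", "property_type"])["agePossession"].transform(mode_or_nan)
by_sector = df.groupby("sector")["agePossession"].transform(mode_or_nan)
df["agePossession"] = df["agePossession"].fillna(by_sector_type).fillna(by_sector)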


Final Outcome

Feature       | Missing Before    | Missing After | Imputation Strategy
built_up_area | 1,968             | 0             | Ratio-based estimation using 530 rows
floorNum      | 17                | 0             | Median from location-type grouping
agePossession | 291 ("Undefined") | 0             | Mode-based contextual replacement

Key Highlights



Stage 7: Feature Selection

Feature selection is a vital part of the machine learning pipeline, helping models focus on the most informative inputs. In this stage, we took a statistically and mathematically driven approach to find and finalize the most important features influencing property prices in Gurgaon.


Step 1: Dropping Irrelevant Features

We dropped the following features before starting selection:


Step 2: Creating User-Friendly Categorical Features

To enhance interpretability and usability, we transformed numerical features into categorical representations:

Luxury Score → luxury_category

A score was generated based on various property attributes and categorized as:

def categorize_luxury(score):
    if 0 <= score < 50:
        return "Low"
    elif 50 <= score < 150:
        return "Medium"
    elif 150 <= score <= 175:
        return "High"
    else:
        return None

Floor Number → floor_category

The floor number was converted into a floor category to enhance user understanding:

def categorize_floor(floor):
    if 0 <= floor <= 2:
        return "Low Floor"
    elif 3 <= floor <= 10:
        return "Mid Floor"
    elif 11 <= floor <= 51:
        return "High Floor"
    else:
        return None

Step 3: Multi-Technique Feature Selection (The Main Act)

Instead of relying on a single method, we used 8 feature selection techniques, treating each as an expert. The final selection was based on the average importance across all techniques (a minimal sketch of this averaging appears after the table below).

Techniques Used:

  1. Correlation Coefficient
  2. Random Forest Feature Importance
  3. Gradient Boosting Importance
  4. Permutation Importance
  5. Lasso Regression Coefficients
  6. Recursive Feature Elimination (RFE)
  7. Linear Regression Coefficients
  8. SHAP (SHapley Additive exPlanations)

Feature Importance Table (Combined View)

Feature         | Corr  | RF    | GBoost | Perm  | Lasso  | RFE   | LinearReg | SHAP
sector          | -0.21 | 0.102 | 0.103  | 0.246 | -0.07  | 0.104 | -0.079    | 0.384
bedRoom         |  0.59 | 0.024 | 0.038  | 0.041 |  0.014 | 0.028 |  0.017    | 0.050
bathroom        |  0.61 | 0.026 | 0.036  | 0.035 |  0.275 | 0.024 |  0.282    | 0.113
balcony         |  0.27 | 0.013 | 0.002  | 0.013 | -0.044 | 0.012 | -0.066    | 0.040
agePossession   | -0.13 | 0.015 | 0.004  | 0.013 |  0.000 | 0.014 | -0.002    | 0.027
built_up_area   |  0.75 | 0.651 | 0.678  | 0.899 |  1.510 | 0.653 |  1.513    | 1.256
study room      |  0.24 | 0.008 | 0.003  | 0.004 |  0.172 | 0.008 |  0.180    | 0.020
servant room    |  0.39 | 0.019 | 0.023  | 0.040 |  0.161 | 0.018 |  0.170    | 0.096
store room      |  0.31 | 0.008 | 0.010  | 0.004 |  0.200 | 0.008 |  0.204    | 0.017
pooja room      |  0.32 | 0.006 | 0.000  | 0.003 |  0.074 | 0.005 |  0.077    | 0.012
others          | -0.01 | 0.003 | 0.000  | 0.002 | -0.017 | 0.003 | -0.025    | 0.007
furnishing_type |  0.23 | 0.011 | 0.003  | 0.010 |  0.164 | 0.010 |  0.173    | 0.027
luxury_category |  0.01 | 0.008 | 0.001  | 0.008 |  0.055 | 0.006 |  0.066    | 0.016
floor_category  |  0.04 | 0.007 | 0.000  | 0.007 | -0.003 | 0.006 | -0.013    | 0.025
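A minimal sketch of how the per-technique scores could be combined into a single ranking, assuming the table above is loaded as a DataFrame from a hypothetical CSV (column names as in the table; the normalization scheme is an illustrative choice):

import pandas as pd

# One row per feature, one column per technique, as in the table above (file name hypothetical).
importance_df = pd.read_csv("feature_importance_combined.csv", index_col="Feature")

# Normalize each technique's absolute scores to [0, 1] so no single method dominates,
# then average across techniques to get a consensus importance.
normalized = importance_df.abs().apply(lambda col: col / col.max())
consensus = normalized.mean(axis=1).sort_values(ascending=False)
print(consensus.head(10))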

Final Selected Features

After averaging and evaluating all 8 techniques, the following features were selected for the final dataset: property_type, sector, bedRoom, bathroom, balcony, agePossession, built_up_area, servant room, store room, furnishing_type, luxury_category, and floor_category (with price as the target).


Output File

The final result was saved as:

gurgaon_properties_post_feature_selection.csv

Sample Row:

property_type,sector,bedRoom,bathroom,balcony,agePossession,built_up_area,servant room,store room,furnishing_type,luxury_category,floor_category,price
0.0,36.0,3.0,2.0,2.0,1.0,850.0,0.0,0.0,0.0,1.0,1.0,0.82


Stage 8: Model Selection – [A] Building the Baseline Model

This stage marks the beginning of our modeling journey. The goal was to establish a baseline model, a foundational benchmark to compare future, more complex models against.

Rather than aiming for perfection here, the focus was on a clean, consistent pipeline and a fair, reproducible evaluation.

And we did exactly that. Let's walk through it.


Preprocessing Pipeline

To ensure fairness and consistency in model evaluation, I applied careful preprocessing:

1. One-Hot Encoding for Categorical Variables

Categorical features such as sector, agePossession, furnishing_type, luxury_category, and floor_category were converted into numerical format using One-Hot Encoding, allowing the linear model to understand them without introducing bias from arbitrary numerical mapping.


2. Feature Scaling

Since Linear Regression is sensitive to feature magnitudes, Standard Scaling was applied to all relevant numerical features, such as built_up_area and the room counts.

This ensured all features contribute equally during model training.


3. Log Transformation on the Target Variable

The target column, price, was right-skewed, meaning most properties had lower prices with a few very high outliers.

To normalize the distribution, a log transformation was applied. This helps stabilize variance and improves the linear model's ability to generalize.


Model Construction

All preprocessing steps were integrated into a single pipeline along with the Linear Regression model. This made the workflow efficient, clean, and reproducible, ensuring that transformations were consistently applied across training and validation splits.
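A minimal sketch of such a pipeline, assuming the post-feature-selection CSV and the column roles shown earlier (the exact column lists and the 10-fold split are assumptions):

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer, TransformedTargetRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("gurgaon_properties_post_feature_selection.csv")
X, y = df.drop(columns=["price"]), df["price"]

categorical = ["property_type", "sector", "agePossession", "furnishing_type",
               "luxury_category", "floor_category"]
numerical = [c for c in X.columns if c not in categorical]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numerical),
])

# Log-transform the target so the linear model sees a less skewed distribution.
model = TransformedTargetRegressor(
    regressor=Pipeline([("prep", preprocess), ("lr", LinearRegression())]),
    func=np.log1p, inverse_func=np.expm1,
)

scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=10, shuffle=True, random_state=42), scoring="r2")
print(scores.mean(), scores.std())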


Model Evaluation – K-Fold Cross-Validation

To validate the model fairly, I used K-Fold Cross-Validation rather than a single train/test split.


Performance Metrics

Metric              | Value
R² Score (Mean)     | 0.8845 (excellent)
R² Score (Std Dev)  | 0.0147 (very stable)
Mean Absolute Error | 0.5324 (reasonable error)

This performance is impressive for a baseline model, showing that the selected features and preprocessing strategy are strong even before introducing any model tuning or complexity.


Notebook Reference

All work for this stage is saved in:

baseline.ipynb

It contains:


Summary

"You can't improve what you don't measure."

This baseline model gave us clear measurement and direction. With a score of 0.88+ out-of-the-box, it validated that our feature selection, data cleaning, and transformation logic from earlier stages were strong.



Stage 8: [B] Model Building and Hyperparameter Tuning

This stage focuses on experimenting with different preprocessing techniques and regression models to identify the most accurate and robust model for price prediction. The entire process was designed to work within a unified pipeline architecture to maintain consistency and reproducibility across experiments.


Encoding Strategies Explored

To understand the impact of feature encoding on model performance, I tested three types of encoding on the categorical features: Ordinal Encoding, One-Hot Encoding, and Target Encoding.

Each encoding method was integrated into a full preprocessing + regression pipeline and evaluated across multiple regression models.


Comparative Results Summary

Ordinal Encoding


One-Hot Encoding


Target Encoding


Hyperparameter Tuning

After comparing models, Random Forest Regressor was selected for exhaustive tuning due to its strong baseline performance and general robustness.

GridSearchCV Tuning
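A minimal sketch of the kind of grid search this involved, reusing the preprocess transformer and the X, y data from the baseline sketch above; the parameter grid below is illustrative, not the project's exact 1280-combination grid:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# `preprocess`, `X`, and `y` come from the baseline sketch in the previous stage.
rf_pipeline = Pipeline([("prep", preprocess), ("rf", RandomForestRegressor(random_state=42))])

param_grid = {
    "rf__n_estimators": [200, 300, 500],
    "rf__max_depth": [None, 10, 20, 30],
    "rf__max_features": ["sqrt", "log2", None],
    "rf__max_samples": [None, 0.75, 1.0],  # hypothetical values
}

search = GridSearchCV(rf_pipeline, param_grid, cv=10, scoring="r2", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)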


Final Model & Pipeline Storage


Final Summary

Aspect             | Value / Model
Best Encoding      | Target Encoding
Best Models        | Extra Trees / RF / XGBoost
Final R² Score     | ~0.90
Final MAE          | ~0.45
Total Model Trials | 1280 (via GridSearchCV)
CV Strategy        | 10-fold

This stage reflects a significant effort in model tuning and experimentation, ensuring that the final predictive system is robust, reliable, and production-ready.

Hurray! The Price Predictor is Live

After a long and enriching 8-stage journey, I've successfully built a robust and accurate property price predictor for Gurugram apartments! This model combines the best of feature engineering, model tuning, and smart encoding strategies to provide realistic price estimations.

Try it out now on the deployed website:
https://houseing.onrender.com/
Enter basic apartment features and get predicted prices instantly!





Next Stage: The Recommendation Module (A Crazy Cool Build!)

Let's move into something even more powerful and interactive:

Apartment Recommendation System

Yes, I built not one, but three different recommendation engines, all combined to suggest similar apartments with your preferences in control.


Input Dataset

I used 247 Gurugram apartment listings, each packed with rich features.

From the complete dataset (PropertyName, PropertySubName, NearbyLocations, LocationAdvantages, Link, PriceDetails, TopFacilities), I selected 3 core features to power the recommendations: LocationAdvantages, PriceDetails, and TopFacilities.


The 3 Pillars of Recommendation

To generate accurate suggestions, I designed three separate recommendation models, each focusing on one feature of similarity:

These are visualized as a flow of 3 boxes:

  1. Location Advantage Model
  2. Price Similarity Model
  3. Facility Match Model

Each model gives its own similarity score, and together they create a hybrid recommendation score.


Here is the ASCII-style architecture diagram:

    +---------------------+       +---------------------+       +---------------------+
    | Location Advantage  |       |     Price Details   |       |   Top Facilities    |
    +---------------------+       +---------------------+       +---------------------+
              \                         |                         /
               \                        |                        /
                \                       |                       /
                 \                      |                      /
                  \                     |                     /
                   \                    |                    /
                    \                   |                   /
                     \                  |                  /
                      \                 |                 /
                       \                |                /
                        \               |               /
                         \              |              /
                          \             |             /
                           \            |            /
                            \           |           /
                             \          |          /
                              \         |         /
                               \        |        /
                             +--------------------------------------+
                             |     Final Recommendation System      |
                             +--------------------------------------+

How the Engine Works

Here's the full recommendation workflow:

  1. The user enters a location (e.g., IGI Airport) and a radius.
  2. Apartments within that radius are filtered and displayed.
  3. The user selects one apartment.
  4. The system computes similarity scores from all 3 models.
  5. These scores are combined with weightage to produce a final similarity score.
  6. The top 5 apartments are returned as personalized recommendations!

Why 3 Models Instead of 1?

The modular setup allows the weighting of each similarity model to be adjusted, so recommendation priorities can be customized.

This flexibility puts user preference in control and enhances recommendation quality.


Preprocessing & Regex Magic

Unstructured location-advantage text was cleaned with regular expressions and parsed into structured, usable data (a minimal sketch of one possible approach follows).
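A hedged sketch of the kind of regex parsing involved, assuming LocationAdvantages holds strings like "IGI Airport - 20.4 Km" (the real field format may differ):

import re

# Hypothetical raw value; the actual scraped format may differ slightly.
raw = "IGI Airport - 20.4 Km | Huda City Centre - 9.1 Km"

def parse_location_advantages(text):
    """Extract {place: distance_in_km} pairs from a free-text advantages string."""
    pattern = re.compile(r"([A-Za-z0-9 .'&]+?)\s*-\s*([\d.]+)\s*Km", re.IGNORECASE)
    return {place.strip(): float(dist) for place, dist in pattern.findall(text)}

print(parse_location_advantages(raw))  # {'IGI Airport': 20.4, 'Huda City Centre': 9.1}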


Model Files for Website Integration

The backend is powered by the three precomputed cosine-similarity matrices, stored as model files and loaded by the website.

The current weighted formula:

cosine_sim_matrix = 0.5 * cosine_sim1 + 0.8 * cosine_sim2 + 1 * cosine_sim3

In the future, this formula can be easily adjusted to prioritize any feature, giving users full control over recommendation behavior.
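A minimal sketch of how the three matrices might be built and combined, assuming TF-IDF text features for location advantages and facilities and scaled numeric features for price (the project's exact preprocessing, column names, and file names are assumptions):

import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import StandardScaler

df_apts = pd.read_csv("gurgaon_apartments.csv")  # hypothetical file with the 247 listings

# Text-based similarity for location advantages and facilities (TF-IDF is an assumed choice).
loc_matrix = TfidfVectorizer().fit_transform(df_apts["LocationAdvantages"].fillna(""))
fac_matrix = TfidfVectorizer().fit_transform(df_apts["TopFacilities"].fillna(""))
# Numeric similarity for price; min_price / max_price are hypothetical columns parsed from PriceDetails.
price_matrix = StandardScaler().fit_transform(df_apts[["min_price", "max_price"]])

cosine_sim1 = cosine_similarity(loc_matrix)    # location-advantage similarity
cosine_sim2 = cosine_similarity(price_matrix)  # price similarity
cosine_sim3 = cosine_similarity(fac_matrix)    # facility similarity

# Weighted blend used by the website (weights adjustable).
cosine_sim_matrix = 0.5 * cosine_sim1 + 0.8 * cosine_sim2 + 1 * cosine_sim3

def recommend(apartment_index, top_n=5):
    """Return indices of the top-N most similar apartments, excluding the query itself."""
    scores = cosine_sim_matrix[apartment_index].copy()
    scores[apartment_index] = -np.inf
    return np.argsort(scores)[::-1][:top_n]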


Summary

You can now search apartments around a chosen location, get personalized recommendations, and predict prices, all in one place.

See it live in action:
Try the Recommendation + Prediction Tool: https://houseing.onrender.com/

This is not just a tool, it's your smartest assistant for exploring homes in Gurugram!





Real Estate Data Analytics & Visualization – Gurugram

One of the most insightful and interactive modules of this project is Data Analytics & Visualization, which brings the Gurugram real estate dataset to life with clean, meaningful, and crazy-good plots! These visualizations uncover deep insights about locality trends, pricing patterns, and buyer preferences, all made easily understandable through visuals.

Below is a breakdown of the key visualizations and what they reveal:


Geospatial Plot (Gurugram Property Map)

This plot visualizes the geographical distribution of property listings across Gurugram using Folium, an interactive mapping library in Python.

How It Works:

Key Insights:

Example:
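A minimal sketch of the kind of Folium map used here, assuming latitude/longitude columns exist for each listing (column names and the geocoding step are assumptions):

import folium
import pandas as pd

df = pd.read_csv("gurgaon_properties.csv")  # assumes latitude/longitude were added via geocoding

m = folium.Map(location=[28.46, 77.03], zoom_start=11)  # roughly centred on Gurugram
for _, row in df.dropna(subset=["latitude", "longitude"]).iterrows():
    folium.CircleMarker(
        location=[row["latitude"], row["longitude"]],
        radius=4,
        popup=str(row["property_name"]),
        fill=True,
    ).add_to(m)
m.save("gurgaon_property_map.html")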


Area vs Price Scatter Plot

This scatter plot illustrates the relationship between built-up area (in sq ft) and property price (in ₹).

How It Works:

Key Insights:

Example:


BHK Distribution (Pie Chart)

This pie chart presents the distribution of properties based on BHK configuration.

How It Works:

Key Insights:

Example:


Price Distribution by BHK (Box Plot)

This box plot compares price ranges across different BHK categories.

How It Works:

Key Insights:

Example:


Property Type Distribution (Histogram)

This plot distinguishes between Flats and Independent Houses.

How It Works:

Key Insights:

Example:


WordCloud of Common Features

A word cloud showing the most frequently mentioned property features.

How It Works:

Key Insights:

Example:
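A minimal sketch of how such a word cloud could be generated from the features column, assuming the wordcloud package is installed (the column format is an assumption):

import matplotlib.pyplot as plt
import pandas as pd
from wordcloud import WordCloud

df = pd.read_csv("gurgaon_properties.csv")
# Concatenate all feature strings into one corpus (column format assumed).
text = " ".join(df["features"].dropna().astype(str))

wc = WordCloud(width=800, height=400, background_color="white").generate(text)
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()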


Geospatial Price Heatmap

This heatmap illustrates price per sq ft intensity across Gurugram sectors.

How It Works:

Key Insights:

Example:
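A minimal sketch using Folium's HeatMap plugin, assuming sector-level latitude/longitude and price_per_sqft values are available (column names illustrative):

import folium
import pandas as pd
from folium.plugins import HeatMap

df = pd.read_csv("gurgaon_properties.csv")
sector_avg = df.groupby("sector")[["latitude", "longitude", "price_per_sqft"]].mean().dropna()

m = folium.Map(location=[28.46, 77.03], zoom_start=11)
# Each point is [lat, lon, weight]; the weight drives the heat intensity.
HeatMap(sector_avg[["latitude", "longitude", "price_per_sqft"]].values.tolist(), radius=25).add_to(m)
m.save("price_per_sqft_heatmap.html")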


Sankey Chart (Property Type → Price Band)

This Sankey diagram shows the flow of property types into different price bands.

How It Works:

Key Insights:

Example:
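A minimal sketch of a Plotly Sankey linking property types to price bands, assuming property_type and price columns (the band edges, and prices being in crores, are assumptions):

import pandas as pd
import plotly.graph_objects as go

df = pd.read_csv("gurgaon_properties.csv")
df["price_band"] = pd.cut(df["price"], bins=[0, 1, 2, 5, float("inf")],
                          labels=["<1 Cr", "1-2 Cr", "2-5 Cr", "5+ Cr"])  # assumes price is in crores

flows = df.groupby(["property_type", "price_band"], observed=True).size().reset_index(name="count")
type_labels = [str(t) for t in flows["property_type"].unique()]
band_labels = [str(b) for b in flows["price_band"].unique()]
labels = type_labels + band_labels
index = {label: i for i, label in enumerate(labels)}

fig = go.Figure(go.Sankey(
    node=dict(label=labels),
    link=dict(
        source=[index[str(t)] for t in flows["property_type"]],
        target=[index[str(b)] for b in flows["price_band"]],
        value=flows["count"].tolist(),
    ),
))
fig.show()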


Price per Sqft by Sector (Box Plot)

This box plot compares price per sq ft across different sectors.

How It Works:

Key Insights:

Example:


Average Price by Sector (Bar Chart)

A bar chart highlighting the average price of listings across sectors.

How It Works:

Key Insights:

Example:


Feature Importance Plot (Model Explainability)

This bar chart shows the importance of different features in the model's price prediction.

How It Works:

Key Insights:

Example:


Furnishing Type vs Average Price

This bar chart shows the impact of furnishing type on average property price.

How It Works:

Key Insights:

Example:


Why It Matters

These visualizations offer a clear, at-a-glance view of Gurugram's property market from many angles.

Whether you're a homebuyer, an investor, or a real estate analyst, this module is a powerful lens into Gurugram's real estate dynamics.