The Dashboard Migration Button Won't Save Your Snowflake Credits
- Digital Hive


Snowflake has made it official. On April 20, 2026, the creation of new Snowsight dashboards was permanently disabled across all accounts. On June 22, 2026, weeks from now, Legacy Dashboards will be fully removed from Snowsight. No access. No fallback. Gone.
If you are still running legacy dashboards, you are not in a grey area. You are on a hard deadline with no exceptions.

The obvious escape hatch is Snowflake's built-in "Generate Streamlit app" button: open your dashboard, click once, done. But at best it produces the same compute footprint you already had. At worst, with multiple concurrent users, it costs more, because the default warehouse runtime spins up a separate app instance per viewer where a legacy dashboard would have served everyone from a shared result. This post looks at what that means in practice and what to write instead.
What the Button Produces
The conversion tool does a 1-to-1 translation. Every dashboard tile becomes its own session.sql().to_pandas() call. The SQL is preserved verbatim. There is no caching, no in-memory filtering. A five-tile dashboard becomes five separate warehouse queries on every single page load and on every filter change.
```python
# What the button generates: one warehouse query per tile
session = get_active_session()

df1 = session.sql("SELECT region, SUM(sales_amount) ... GROUP BY region").to_pandas()
st.bar_chart(df1)  # warehouse hit #1

df2 = session.sql("SELECT COUNT(DISTINCT customer_id) ...").to_pandas()
st.metric("Customers", df2['UNIQUE_CUSTOMERS'][0])  # warehouse hit #2

df3 = session.sql("SELECT DATE_TRUNC('month', order_date), SUM(sales_amount) ... GROUP BY 1").to_pandas()
st.line_chart(df3)  # warehouse hit #3
```

Three tiles. Three warehouse spin-ups on load. Three more on every filter interaction. You have migrated the format, not the problem.
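To see why per-tile queries compound, here is a minimal stdlib-only sketch of Streamlit's execution model, which reruns the entire script on every widget interaction. `run_tile_query` is a hypothetical stand-in for `session.sql(...).to_pandas()`; no Streamlit or Snowflake is required to follow the arithmetic.

```python
# Sketch: Streamlit reruns the whole script on every interaction,
# so N uncached tiles cost N warehouse queries per rerun.
query_count = 0

def run_tile_query(sql):
    """Hypothetical stand-in for session.sql(sql).to_pandas()."""
    global query_count
    query_count += 1
    return f"result of: {sql}"

def render_dashboard():
    # Three tiles, straight from the conversion tool: no caching anywhere.
    run_tile_query("SELECT region, SUM(sales_amount) FROM orders GROUP BY region")
    run_tile_query("SELECT COUNT(DISTINCT customer_id) FROM orders")
    run_tile_query("SELECT DATE_TRUNC('month', order_date), SUM(sales_amount) FROM orders GROUP BY 1")

render_dashboard()        # initial page load
for _ in range(2):        # two filter interactions -> two full script reruns
    render_dashboard()

print(query_count)  # 9 warehouse hits for one viewer and two clicks
```

Scale that by tiles per dashboard and viewers per day and the credit burn becomes obvious.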
The Right Approach: One Query, Everything in Memory
Write one broader query, cache it with @st.cache_data, and build all your charts from that single DataFrame. Filter interactions operate on the cached data in memory with zero additional warehouse invocations.
```python
# The efficient approach: one cached query feeds every visual
session = get_active_session()

@st.cache_data(ttl=3600)  # warehouse hit at most once per hour
def load_data():
    return session.sql("""
        SELECT order_date, region, sales_amount, customer_id
        FROM production.sales.fct_orders
        WHERE order_date >= DATEADD(year, -1, CURRENT_DATE())
    """).to_pandas()

df = load_data()

# In-memory filter: no warehouse cost
selected_region = st.sidebar.selectbox("Region", ["All"] + df['REGION'].unique().tolist())
filtered = df[df['REGION'] == selected_region] if selected_region != "All" else df

# All visuals from the same cached DataFrame
col1, col2 = st.columns(2)
col1.metric("Total Sales", f"${filtered['SALES_AMOUNT'].sum():,.2f}")
col2.metric("Active Customers", f"{filtered['CUSTOMER_ID'].nunique():,}")

trend = filtered.groupby(filtered['ORDER_DATE'].dt.to_period('M'))['SALES_AMOUNT'].sum().reset_index()
trend['ORDER_DATE'] = trend['ORDER_DATE'].dt.to_timestamp()
st.line_chart(trend, x='ORDER_DATE', y='SALES_AMOUNT')
```

Same three visuals. One warehouse query per hour. Every filter is instant and free.
The ttl=3600 parameter is the key control lever. It determines how long the DataFrame lives in memory before the warehouse is queried again. Match it to your pipeline's update frequency: overnight batch jobs can tolerate a longer TTL, operational dashboards may need a shorter one.
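The effect of `ttl` can be illustrated with a small stdlib-only sketch. This is a simplified stand-in for `@st.cache_data`, not Streamlit's actual implementation: within the TTL window the expensive loader runs once, and every subsequent rerun is served from memory.

```python
import time

def cache_data(ttl):
    """Toy version of @st.cache_data(ttl=...): keep one result for ttl seconds."""
    def decorator(fn):
        state = {"value": None, "expires": 0.0}
        def wrapper():
            now = time.monotonic()
            if now >= state["expires"]:      # cache miss or TTL expired
                state["value"] = fn()        # the expensive warehouse query
                state["expires"] = now + ttl
            return state["value"]            # cache hit: no warehouse cost
        return wrapper
    return decorator

calls = 0

@cache_data(ttl=3600)
def load_data():
    global calls
    calls += 1
    return ["row1", "row2"]  # stand-in for the real DataFrame

for _ in range(5):  # five page loads / filter reruns within the hour
    load_data()

print(calls)  # 1: the warehouse was queried exactly once
```

With `ttl=3600`, five reruns cost one query; without caching they would cost five per tile.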
Button vs. Manual
| | Generate Streamlit App | Manual Approach |
|---|---|---|
| Warehouse hits on load | One per tile | One total |
| Warehouse hits on filter | One per tile, every time | Zero |
| Caching | None | @st.cache_data |
| Time to migrate | Minutes | Hours |
| Credit efficiency | Same as legacy, or worse under concurrency | Significantly reduced |
Which Path Should You Take?
The two approaches are not mutually exclusive. If June 22 is bearing down on you, we suggest approaching the transition to Streamlit in two phases:
Phase 1: Use the button to meet the deadline. Convert everything before June 22. The generated apps will work and your data stays accessible.
Phase 2: Refactor what matters. After the deadline, audit your converted apps. Dashboards with the most tiles, the highest traffic, or the heaviest SQL are your refactoring priorities. For each one, consolidate the tile queries into a single cached base query. Not every dashboard is worth the effort. A low-traffic internal report with two simple tiles is not. A customer-facing dashboard with daily active users is.
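One way to order the Phase 2 backlog is to estimate daily warehouse queries per converted app as tiles times daily views. The sketch below uses entirely hypothetical dashboard names and numbers; pull real figures from your own usage data.

```python
# Sketch: rank converted apps for refactoring by tiles x daily views.
# All names and counts here are made up for illustration.
dashboards = [
    {"name": "exec_sales_overview", "tiles": 12, "daily_views": 150},
    {"name": "customer_portal",     "tiles": 6,  "daily_views": 900},
    {"name": "internal_qa_report",  "tiles": 2,  "daily_views": 3},
]

for d in dashboards:
    # An uncached app pays roughly tiles x views warehouse queries per day.
    d["daily_queries"] = d["tiles"] * d["daily_views"]

for d in sorted(dashboards, key=lambda d: d["daily_queries"], reverse=True):
    print(f"{d['name']}: ~{d['daily_queries']} warehouse queries/day")
```

Here the low-traffic QA report lands at the bottom of the list and is probably not worth refactoring at all, matching the guidance above.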
Conclusion
Snowflake's migration button solves one problem: keeping you off a deprecated feature before June 22. It does not fix the compute inefficiency that made your legacy dashboards expensive; the converted apps inherit it query for query.
For the full technical reference, see the official Snowflake Streamlit documentation.

Written by Aslan Hattukai
Data Engineer



