postgresql - A slow SQL statement, is there any way to optimize it?
Our application has a slow statement that takes more than 11 seconds. Is there any way to optimize it?
The SQL statement:
select id from mapfriends.cell_forum_topic
where id in (select topicid from mapfriends.cell_forum_item where skyid=103230293 group by topicid)
  and categoryid=29 and hidden=false
order by restoretime desc limit 10 offset 0;

   id
---------
 2471959
 2382296
 1535967
 2432006
 2367281
 2159706
 1501759
 1549304
 2179763
 1598043
(10 rows)

Time: 11444.976 ms
The query plan:
friends=> explain select id from friends.cell_forum_topic where id in (
friends(> select topicid from friends.cell_forum_item where skyid=103230293 group by topicid)
friends-> and categoryid=29 and hidden=false order by restoretime desc limit 10 offset 0;
                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1443.15..1443.15 rows=2 width=12)
   ->  Sort  (cost=1443.15..1443.15 rows=2 width=12)
         Sort Key: cell_forum_topic.restoretime
         ->  Nested Loop  (cost=1434.28..1443.14 rows=2 width=12)
               ->  HashAggregate  (cost=1434.28..1434.30 rows=2 width=4)
                     ->  Index Scan using cell_forum_item_idx_skyid on cell_forum_item  (cost=0.00..1430.49 rows=1516 width=4)
                           Index Cond: (skyid = 103230293)
               ->  Index Scan using cell_forum_topic_pkey on cell_forum_topic  (cost=0.00..4.40 rows=1 width=12)
                     Index Cond: (cell_forum_topic.id = cell_forum_item.topicid)
                     Filter: ((NOT cell_forum_topic.hidden) AND (cell_forum_topic.categoryid = 29))
(10 rows)

Time: 1.109 ms
The indexes:
friends=> \d cell_forum_item
                                  Table "friends.cell_forum_item"
 column  |              type              |                          modifiers
---------+--------------------------------+--------------------------------------------------------------
 id      | integer                        | not null default nextval('cell_forum_item_id_seq'::regclass)
 topicid | integer                        | not null
 skyid   | integer                        | not null
 content | character varying(200)         |
 addtime | timestamp(0) without time zone | default now()
 ischeck | boolean                        |
Indexes:
    "cell_forum_item_pkey" PRIMARY KEY, btree (id)
    "cell_forum_item_idx" btree (topicid, skyid)
    "cell_forum_item_idx_1" btree (topicid, id)
    "cell_forum_item_idx_skyid" btree (skyid)

friends=> \d cell_forum_topic
                                        Table "friends.cell_forum_topic"
   column    |              type              |                             modifiers
-------------+--------------------------------+------------------------------------------------------------------------------------
 id          | integer                        | not null default nextval(('"friends"."cell_forum_topic_id_seq"'::text)::regclass)
 categoryid  | integer                        | not null
 topic       | character varying              | not null
 content     | character varying              | not null
 skyid       | integer                        | not null
 addtime     | timestamp(0) without time zone | default now()
 reference   | integer                        | default 0
 restore     | integer                        | default 0
 restoretime | timestamp(0) without time zone | default now()
 locked      | boolean                        | default false
 settop      | boolean                        | default false
 hidden      | boolean                        | default false
 feature     | boolean                        | default false
 picid       | integer                        | default 29249
 managerid   | integer                        |
 imageid     | integer                        | default 0
 pass        | boolean                        | default false
 ischeck     | boolean                        |
Indexes:
    "cell_forum_topic_pkey" PRIMARY KEY, btree (id)
    "idx_cell_forum_topic_1" btree (categoryid, settop, hidden, restoretime, skyid)
    "idx_cell_forum_topic_2" btree (categoryid, hidden, restoretime, skyid)
    "idx_cell_forum_topic_3" btree (categoryid, hidden, restoretime)
    "idx_cell_forum_topic_4" btree (categoryid, hidden, restore)
    "idx_cell_forum_topic_5" btree (categoryid, hidden, restoretime, feature)
    "idx_cell_forum_topic_6" btree (categoryid, settop, hidden, restoretime)
EXPLAIN ANALYZE:
mapfriends=> explain analyze select id from mapfriends.cell_forum_topic
mapfriends-> join (select topicid from mapfriends.cell_forum_item where skyid=103230293 group by topicid) tmp
mapfriends-> on mapfriends.cell_forum_topic.id=tmp.topicid
mapfriends-> where categoryid=29 and hidden=false order by restoretime desc limit 10 offset 0;
                                                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1446.89..1446.90 rows=2 width=12) (actual time=18016.006..18016.013 rows=10 loops=1)
   ->  Sort  (cost=1446.89..1446.90 rows=2 width=12) (actual time=18016.001..18016.002 rows=10 loops=1)
         Sort Key: cell_forum_topic.restoretime
         Sort Method:  quicksort  Memory: 25kB
         ->  Nested Loop  (cost=1438.02..1446.88 rows=2 width=12) (actual time=16988.492..18015.869 rows=20 loops=1)
               ->  HashAggregate  (cost=1438.02..1438.04 rows=2 width=4) (actual time=15446.735..15447.243 rows=610 loops=1)
                     ->  Index Scan using cell_forum_item_idx_skyid on cell_forum_item  (cost=0.00..1434.22 rows=1520 width=4) (actual time=302.378..15429.782 rows=7133 loops=1)
                           Index Cond: (skyid = 103230293)
               ->  Index Scan using cell_forum_topic_pkey on cell_forum_topic  (cost=0.00..4.40 rows=1 width=12) (actual time=4.210..4.210 rows=0 loops=610)
                     Index Cond: (cell_forum_topic.id = cell_forum_item.topicid)
                     Filter: ((NOT cell_forum_topic.hidden) AND (cell_forum_topic.categoryid = 29))
 Total runtime: 18019.461 ms
Could you give us some more information about the tables (the statistics) and your configuration?
select version();
select category, name, setting from pg_settings where name in ('effective_cache_size', 'enable_seqscan', 'shared_buffers');
select * from pg_stat_user_tables where relname in ('cell_forum_topic', 'cell_forum_item');
select * from pg_stat_user_indexes where relname in ('cell_forum_topic', 'cell_forum_item');
select * from pg_stats where tablename in ('cell_forum_topic', 'cell_forum_item');
And before getting this data, run ANALYZE.
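For example (a minimal sketch; the `friends` schema name is taken from the `\d` output above):

```sql
-- Refresh the planner statistics for both tables so the
-- pg_stats/pg_stat_* queries above reflect the current data.
ANALYZE friends.cell_forum_topic;
ANALYZE friends.cell_forum_item;
```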
It looks like you have a problem with an index; this is where the query spends all its time:
->  Index Scan using cell_forum_item_idx_skyid on cell_forum_item  (cost=0.00..1434.22 rows=1520 width=4) (actual time=302.378..15429.782 rows=7133 loops=1)
If you use VACUUM FULL on a regular basis (not recommended!), index bloat might be your problem. A REINDEX might be a good idea, just to be sure:
REINDEX TABLE cell_forum_item;
And talking about indexes, you can drop a couple of them; they are obsolete:
"idx_cell_forum_topic_6" btree (categoryid, settop, hidden, restoretime)
"idx_cell_forum_topic_3" btree (categoryid, hidden, restoretime)
Other indexes contain the same data and can be used by the database just as well.
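A sketch of the cleanup (assuming the indexes live in the `friends` schema, as the `\d` output suggests; verify with `\di` before dropping):

```sql
-- idx_cell_forum_topic_6 is a prefix of idx_cell_forum_topic_1, and
-- idx_cell_forum_topic_3 is a prefix of idx_cell_forum_topic_2, so
-- the planner can use the wider indexes for the same queries.
DROP INDEX friends.idx_cell_forum_topic_6;
DROP INDEX friends.idx_cell_forum_topic_3;
```

Fewer indexes also means less write overhead on every INSERT and UPDATE.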
It looks like you have a couple of problems:
- autovacuum is turned off or it's way behind. The last autovacuum was on 2010-12-02, and you have 256734 dead tuples in one table and 451430 dead ones in the other. You have to do something about this; it's a serious problem.
- when autovacuum is working again, you have to do a VACUUM FULL and a REINDEX to force a table rewrite and get rid of the empty space in your tables.
- after fixing the vacuum problem, you have to run ANALYZE as well: the database expects 1520 results but gets 7133 results. This could be a problem with statistics; maybe you have to increase the statistics target.
- the query needs rewriting as well: it gets 7133 results but needs only 610 of them. Over 90% of the results are thrown away, and getting these 7133 takes a lot of time, over 15 seconds. Get rid of the subquery by using a JOIN without the GROUP BY, or use EXISTS, also without the GROUP BY.
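One possible EXISTS rewrite (a sketch based on the schema shown above; check the plan with EXPLAIN ANALYZE before relying on it):

```sql
-- EXISTS stops probing cell_forum_item as soon as one matching row
-- is found, so no GROUP BY deduplication step is needed.
SELECT t.id
FROM mapfriends.cell_forum_topic t
WHERE t.categoryid = 29
  AND t.hidden = false
  AND EXISTS (SELECT 1
              FROM mapfriends.cell_forum_item i
              WHERE i.topicid = t.id
                AND i.skyid = 103230293)
ORDER BY t.restoretime DESC
LIMIT 10 OFFSET 0;
```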
But first get autovacuum back on track, before you get new or other problems.
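The maintenance steps above could look like this as one-off commands (a sketch, assuming autovacuum has already been re-enabled in postgresql.conf; the statistics target of 100 is only an example value):

```sql
-- One-off cleanup: rewrite the tables to reclaim dead-tuple space,
-- rebuild the possibly bloated indexes, then refresh statistics.
VACUUM FULL friends.cell_forum_item;
VACUUM FULL friends.cell_forum_topic;
REINDEX TABLE cell_forum_item;
REINDEX TABLE cell_forum_topic;
ANALYZE friends.cell_forum_item;
ANALYZE friends.cell_forum_topic;

-- If the row estimates are still far off afterwards, raise the
-- per-column statistics target and re-analyze:
ALTER TABLE friends.cell_forum_item ALTER COLUMN skyid SET STATISTICS 100;
ANALYZE friends.cell_forum_item;
```

Note that VACUUM FULL takes an exclusive lock on the table, so schedule it during a quiet window.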