Wednesday, March 7, 2012

is a replication session a transaction?

I am curious whether a "session" for merge replication is a transaction. If I
see 100 actions in the Agent History and the last one is "the process could
not enumerate changes at the subscriber" due to a general network error
(a dropped connection or whatever), did those 10000 data changes get
committed, or were they all rolled back so that the next agent run will start
all over again?
Assuming the worst case, which profile parameter would I change to
reduce the number of rows per commit? Our agents for low-bandwidth
subscribers run for hours and hours and never seem to catch up,
because they always eventually lose the connection.
Thanks for any insights.
They are committed as singletons. A batch of 100 rows (the default, or whatever
is set in UploadWriteChangesPerBatch/DownloadWriteChangesPerBatch) is tried,
and if errors occur while applying a batch, the error rows go into a retry
queue which is tried after the batch completes.
Should a network error occur during a synchronization, the merge agent will
look at the last batch successfully applied and then start to apply the next
one again. There is a possibility that if 99 of the rows in the last
batch made it through, all 99 will be tried again in order to push through
the one row which was not successful.
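A minimal sketch of that apply-then-retry pattern (illustrative only; the function names are invented and this is not the agent's actual code):

```python
def apply_in_batches(rows, try_apply, batch_size=100):
    """Apply rows batch by batch; rows that error are retried after the batch."""
    applied, failed = [], []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        retry_queue = []
        for row in batch:
            if try_apply(row):
                applied.append(row)
            else:
                retry_queue.append(row)  # error row: defer until the batch completes
        for row in retry_queue:          # second pass once the batch is done
            (applied if try_apply(row) else failed).append(row)
    return applied, failed
```

A smaller batch size means less work is repeated when the connection drops mid-batch, at the cost of more round trips, which is exactly the trade-off the slow-link profile makes.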
You will notice that the slow-link agent profile drops these values from 50 to
1 (or to 5 for UploadGenerationsPerBatch).
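If you would rather switch the agent to that predefined profile than edit each parameter by hand, something along these lines at the Distributor should work. The ids below are placeholders; agent and profile ids vary per installation, so use the values returned by the first query:

```sql
-- Run at the Distributor, in the distribution database.
-- List the available profiles for the Merge Agent (agent_type 4):
EXEC sp_help_agent_profile @agent_type = 4;

-- Point a merge agent at the predefined slow-link profile.
EXEC sp_change_agent_profile
    @agent_id   = 1,   -- placeholder: id of your merge agent
    @agent_type = 4,   -- 4 = Merge Agent
    @profile_id = 7;   -- placeholder: id of the "Slow link agent profile"
```

The change takes effect the next time the agent starts, so agents already running will need to be restarted to pick up the smaller batch sizes.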
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Anachostic" <anachostic@.remove.700cb.net> wrote in message
news:Xns995CA07258BD9anachostic@.207.46.248.16...
