DWR polling after server dies

11 messages

DWR polling after server dies

Peter Bryant
Hi.

I have a client.  Using reverse ajax.  It is polling on a long action.  The server side dies (server restarts, tomcat is killed, etc).  Server/service restarts.  The client side carries on polling until that window is closed.

Is there a mechanism to identify clients polling and to tell them to turn off reverse ajax/polling?  Either server side (preferably) or client hooks I could use.

Is this something that might be a sane default for dwr (e.g. so it stops polling when the thing it was talking to goes away)?

- Peter

Re: DWR polling after server dies

david@butterdev.com
What version of DWR are you using? There is retry logic in place that
will keep attempting to connect to the server -
http://directwebremoting.org/dwr/documentation/reverse-ajax/retry.html.
You should be able to call dwr.engine.setMaxRetries(...) to cap the retries and kill the polling.
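As a rough sketch of what that cutoff does (the stub engine below is only there to make the snippet self-contained - in a real page dwr.engine comes from DWR's engine.js, and the actual retry behaviour is described in the linked docs):

```javascript
// Stub standing in for dwr.engine, purely so this sketch is runnable;
// in a real page dwr.engine is provided by DWR's engine.js.
var dwr = { engine: {
  active: false,
  maxRetries: Infinity,
  failures: 0,
  setActiveReverseAjax: function (on) { this.active = on; },
  setMaxRetries: function (n) { this.maxRetries = n; },
  // Models the documented behaviour: once the retry budget is spent,
  // polling is switched off for good.
  pollFailed: function () {
    this.failures += 1;
    if (this.failures >= this.maxRetries) this.setActiveReverseAjax(false);
  }
} };

// The configuration suggested above:
dwr.engine.setActiveReverseAjax(true);
dwr.engine.setMaxRetries(3); // give up after 3 consecutive failed polls
```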


On 03/01/2016 07:36 PM, Peter Bryant wrote:



Re: DWR polling after server dies

Mike Wilson
Administrator
In reply to this post by Peter Bryant
This is a configurable feature, with for example the possibility of being
notified about your server's offline/online status. See:
http://directwebremoting.org/dwr/documentation/reverse-ajax/retry.html

Best regards
Mike Wilson



Re: DWR polling after server dies

Peter Bryant
In reply to this post by Peter Bryant
Hi.  I am running 3.0.1 (from December?).  

I used dwr.engine.setMaxRetries(...);

If the server goes away and doesn't come back before max retries, dwr stops polling.  That's fine.

If the server comes back before max retries is hit, the client keeps polling.  (It actually looks like it multiplies the number or frequency of requests.  Is that a separate issue?  Anything you can check on?)

Also the process on the server side has gone away.  So it won't have any more updates for the client.  So the client will keep polling (at what appears to be a higher rate).

I guess I'm looking for a way for the server side to send out a request to stop polling for things that shouldn't be polling any more.

- Peter

Re: DWR polling after server dies

david@butterdev.com
It sounds like that could be related to:
https://directwebremoting.atlassian.net/browse/DWR-654

Doesn't the process on the server restart when the server restarts and
wouldn't you want clients that were getting updates to continue to get
them after the outage?  This is what the retry logic was designed for.  
If you simply want all polling to stop when an error is encountered, you
can kill the poll in your pollStatusHandler, configure the retry logic
to fail fast, and so on.
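For example (a sketch only - the handler registration and its exact signature vary by DWR version, so treat the names here as assumptions and check the retry docs):

```javascript
// A fail-fast poll status handler: the first time DWR reports the
// server offline, stop polling instead of retrying.
// `engine` stands in for dwr.engine so the sketch is self-contained.
function makeFailFastHandler(engine) {
  return function (online) {               // new online status from DWR
    if (!online) {
      engine.setActiveReverseAjax(false);  // kill the poll immediately
    }
  };
}

// Wiring it up would look roughly like:
//   dwr.engine.setPollStatusHandler(makeFailFastHandler(dwr.engine));
// (registration call name assumed; verify against your DWR version.)
```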

On 03/02/2016 02:18 PM, Peter Bryant wrote:



Re: DWR polling after server dies

pbkwee
The heartbeats are not my issue.

The reverse ajax stuff is so that my server can send clients status
updates on long running tasks.  After the outage those tasks no longer
exist.  So there will be no updates.  The clients keep polling.  My
server will never have anything interesting to say to them.

I'm not able to easily see how I can ask them to stop.

I've tried iterating over all the script sessions via
ServerContextFactory.get().getAllScriptSessions() and sending a
dwr.engine.setActiveReverseAjax(false).

Issues:

1) I'd like a hook on the client and/or server side that lets me stop
further polling when the server gets a poll for something it doesn't
know about.

The server side gets a POST to .../call/plainpoll/ReverseAjax.dwr in
which the client sends a batchid (serial number), page, and
scriptSessionId.  After a restart, the server will see it has no such
scriptSessionId.  (It also knows from the batchid that this should not
be the first time it sees this session.)  At that point I'd like a
notification on the client or server that lets me add
application-specific logic to stop further polling where appropriate.

I see the pollStatusHandler is used when 1) DWR goes offline 2) DWR
comes back online 3) maxRetries has been reached.

Maybe it also needs a 'server has changed' notification, sent when the
app server is restarted, when a load balancer sends the request to a
different host, or in any other situation where the server receives a
poll for something it knows nothing about.

Issue 2) Client is polling.  Server is killed for a bit and restarts.
The client then polls more times/more frequently - e.g. I'm seeing one
IP sending 80 poll requests a second.

Regards, Peter

On 3/03/16 3:45 PM, David Marginian wrote:


Re: DWR polling after server dies

david@butterdev.com
As I said earlier, I would bet 2) is related to
https://directwebremoting.atlassian.net/browse/DWR-654.  We did not have
a timeout set on the heartbeat calls.  As a result, while your server is
down several of them back up, and multiple reverse ajax connections are
initiated when the server comes back up.

Let me think about this some more.  For now, could you patch this
temporarily by checking the number of empty responses you get?  For
example, under normal circumstances I expect you are getting status
updates from the server.  After the server goes down and comes back up
you are getting no status information.  Can you kill the poll after N
polls with no status updates?
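That suggestion could be sketched like this (all names illustrative; `engine` stands in for dwr.engine so the sketch is self-contained):

```javascript
// Count consecutive polls that carried no status update, and switch
// reverse ajax off once `maxEmpty` empty polls are seen in a row.
function makeEmptyPollWatchdog(engine, maxEmpty) {
  var empty = 0;
  return {
    // Call this from the client's update callback on every poll result.
    onPoll: function (hadUpdate) {
      empty = hadUpdate ? 0 : empty + 1;
      if (empty >= maxEmpty) {
        engine.setActiveReverseAjax(false); // server has nothing more for us
      }
    }
  };
}
```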

On 2016-03-02 20:42, Peter Bryant wrote:


Re: DWR polling after server dies

Mike Wilson
Administrator
In reply to this post by pbkwee
I've done similar stuff as you, using reverse ajax to deliver results of
long-running operations. My solution used client-side JavaScript to switch
setActiveReverseAjax() on and off to only have it running while there were
outstanding replies. I kept a data structure on the client-side with every
ongoing operation and its callbacks so I could notify on completion. I also
used this data to fail the operations after a longish timeout, which would
allow reverse ajax polls to be switched off.

I think your solution could benefit from this model, i.e. letting the client
keep track of open calls and possibly having it "sync up" with the server at
times if you want to update things across server restarts. I think the stuff
you are doing should be considered application-level, so your application
code needs to be aware of it either way (more logic than what the DWR comm
channel will take care of).
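As a sketch of the model described above (all names illustrative; `engine` stands in for dwr.engine, and `now` is an injectable clock so the sketch is self-contained):

```javascript
// Client-side registry of outstanding long-running operations.
// Reverse ajax is only active while at least one operation is pending,
// and operations that outlive `timeoutMs` are failed so polling can be
// switched off even after a server restart loses the tasks.
function makeOperationTracker(engine, timeoutMs, now) {
  var ops = {};
  var count = 0;
  function syncPolling() { engine.setActiveReverseAjax(count > 0); }
  return {
    start: function (id, onDone, onFail) {
      ops[id] = { onDone: onDone, onFail: onFail, started: now() };
      count += 1;
      syncPolling();
    },
    // The server pushed a result for operation `id`.
    complete: function (id, result) {
      var op = ops[id];
      if (!op) return;            // unknown id, e.g. already completed
      delete ops[id];
      count -= 1;
      syncPolling();
      op.onDone(result);
    },
    // Run periodically (e.g. from setInterval): time out stale operations.
    sweep: function () {
      var t = now();
      for (var id in ops) {
        if (t - ops[id].started > timeoutMs) {
          var op = ops[id];
          delete ops[id];
          count -= 1;
          op.onFail(new Error("operation " + id + " timed out"));
        }
      }
      syncPolling();
    }
  };
}
```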

Best regards
Mike Wilson



Re: DWR polling after server dies

Peter Bryant-2
In reply to this post by david@butterdev.com
I checked out svn and that appears to have resolved 2).  Thank you!

That alleviates much of the pain of 1).

I have some client side code (using a javascript interval) I can use to
check if I should be disabling reverse ajax, plus an on-demand thing on
the server side that pushes out a disable-reverse-ajax script to
'unknown' script sessions.  So that will do as a workaround.

I see I can intercept sessionCreated(ScriptSessionEvent).  If there was
a bit more information on the script session then I may be able to add
some app logic there (like stopping reverse ajax).

3) e.g. could you add a getBatchId to ScriptSession?  That would let me
see if this new session is a continuation of something or not.

4) Also would it make sense to have something that indicated the
scriptsession was polling or not?  isReverseAjax()?

5) Also, how about the ability for the client to set a 'client'
attribute on the script session?  e.g. add a dwr.engine._clientId, let
the client set that, include it in polls, and expose
ScriptSession.getClientId().  My application logic could then be: start
a long-running task, get the task id, send it to the client, and have
the client set it in dwr.engine._clientId.  If I then get a
script-session-created event for an id I don't know about, where
reverse ajax is enabled, I can send a disable-reverse-ajax response.

6) Finally I think there is a bug when adding listeners:

I added to the dwr servlet config xml:

                <init-param>
                  <param-name>org.directwebremoting.event.ScriptSessionListener</param-name>
                  <param-value>$myclassname</param-value>
                </init-param>


DefaultContainer.addParameter adds an instance of the listener when the
value is a string that is a class name.

StartupUtils.resolveListenerImplementations calls
container.getParameter, which returns value.toString() (by default
$classname@$hexadecimal), which leads to a ClassNotFoundException.  You
should probably call getBean instead of getParameter in
resolveListenerImplementations.

The workaround is to implement toString on the listener as class.getName().

- Peter

On 4/03/16 1:42 AM, [hidden email] wrote:


Re: DWR polling after server dies

david@butterdev.com
In reply to this post by Mike Wilson
Peter, if you want to check the fix for #2 you can grab the latest jar
on bamboo -
http://ci.directwebremoting.org/bamboo/browse/DWRTRUNK-ALL-657/artifact,
or Sonatype's OSS repo - http://directwebremoting.org/dwr/downloads/.

On 03/03/2016 04:54 PM, Mike Wilson wrote:



Re: DWR polling after server dies

Mike Wilson
Administrator
In reply to this post by Peter Bryant-2
> an on-demand thing on
> the server side that pushes out disable reverse ajax script
> to 'unknown' script sessions.

This seems a bit superfluous. Reverse ajax will only be activated if your
client-side code activated it in the first place, so it should be enough
that you keep track of this on the client.

> I see I can intercept sessionCreated(ScriptSessionEvent).  If
> there was
> a bit more information on the script session then I may be
> able to add
> some app logic there (like stopping reverse ajax).
>
> 3) e.g. could you add a getBatchId to ScriptSession?  That
> would let me
> see if this new session is a continuation of something or not.
>
> 4) Also would it make sense to have something that indicated the
> scriptsession was polling or not?  isReverseAjax()?

I think you're heading down the wrong alley here. A ScriptSession only
represents a one-to-one mapping to a loaded page in the client, and the aim
is not to make this the main player in controlling reverse ajax. It is up to
the client side to decide when to turn reverse ajax on or off, and it would
take extra network requests to keep the server side (ScriptSession) updated
with the client's intent. Remember that reverse ajax could be running in
polling mode with, for example, a 10-second interval: there would be no
network connection between client and server for 10 seconds, but we would
still consider reverse ajax enabled (if the client doesn't switch it off
during this time, of course).
If you want a server-side control layer you can easily implement this
yourself on top of the DWR reverse ajax channel. I don't see that we would
add this to the DWR 3.x code as it would be either incorrect or inefficient
in too many scenarios. It may still be a good fit for your scenario as you
seem determined to have a server-focused model for reverse ajax (which we
don't really recommend) so by all means go ahead and implement your ideas.

> 5) Also how about the ability on the client to set a 'client'
> attribute
> on the script session?  e.g. add a dwr.engine._clientId.  Let
> the client
> set that.  Include that in polls.  expose
> ScriptSession.getClientID().

There are already a number of generic mechanisms that I think you could use
to solve your problem without adding new features to DWR:

ScriptSession.getId()
  this is a unique id generated by DWR that represents
  the ScriptSession
  (could be used instead of clientId in your scenario?)

ScriptSession.getHttpSessionId()
HttpSession.getId()
  this is the unique id of the associated servlet
  session (JSESSIONID)
  (could be used instead of clientId in your scenario?)

ScriptSession.setAttribute(name, value)
ScriptSession.getAttribute(name)
  store your own data on the ScriptSession
  (could put any of your own ids here)

WebContextFactory.get().getScriptSession()
WebContextFactory.get().getSession()
  gives you access to various objects from a DWR
  servlet thread

dwr.engine.setAttributes(map)
  sets attributes to transfer with every DWR remote call
  and will appear on HttpServletRequest.getAttribute(...)
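As a sketch of the last mechanism (the attribute name is illustrative; the stub below only exists so the fragment is self-contained - in a real page engine.js provides setAttributes, and the attributes then surface server-side via HttpServletRequest.getAttribute as described above):

```javascript
// Stub for dwr.engine.setAttributes; in a real page the attributes
// travel with every subsequent remote call to the server.
var dwr = { engine: { attributes: {}, setAttributes: function (map) {
  for (var k in map) { this.attributes[k] = map[k]; }
} } };

// Tag every DWR call with an application-level id (name illustrative):
dwr.engine.setAttributes({ clientTaskId: "task-42" });
```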

> 6) Finally I think there is a bug when adding listeners:

Thanks for the report! I've added
https://directwebremoting.atlassian.net/browse/DWR-655
that we'll fix shortly.
A workaround for the moment would be to add a comma (,) to the end of your
class name (this will switch the field to multi-value and other code will be
triggered).
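With the init-param from the bug report, that workaround would look like ($myclassname as in the original report):

```xml
<init-param>
  <param-name>org.directwebremoting.event.ScriptSessionListener</param-name>
  <param-value>$myclassname,</param-value>
</init-param>
```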

Best regards
Mike

Peter Bryant wrote:

> I checked out svn and that appears to have resolve 2) .  Thank you!
>
> That alleviates much of the pain of 1).
>
> I have some client side code (using a javascript interval) i
> can use to
> check if I should be disabling reverse ajax.  Plus an
> on-demand thing on
> the server side that pushes out disable reverse ajax script
> to 'unknown'
> script sessions.  So that will do as a workaround.
>
> I see I can intercept sessionCreated(ScriptSessionEvent).  If
> there was
> a bit more information on the script session then I may be
> able to add
> some app logic there (like stopping reverse ajax).
>
> 3) e.g. could you add a getBatchId to ScriptSession?  That
> would let me
> see if this new session is a continuation of something or not.
>
> 4) Also would it make sense to have something that indicated the
> scriptsession was polling or not?  isReverseAjax()?
>
> 5) Also how about the ability on the client to set a 'client'
> attribute on the script session?  e.g. add a dwr.engine._clientId,
> let the client set that, include it in polls, and expose
> ScriptSession.getClientId().  My application logic could then be:
> start a long running task, get the task id, send that to the client,
> have that set in dwr.engine._clientId; if I then get a script session
> created event where I don't know about that id and where reverse ajax
> is enabled, I can send a disable-reverse-ajax response.
>
> 6) Finally I think there is a bug when adding listeners:
>
> I added to the dwr servlet config xml:
>
> <init-param>
>   <param-name>org.directwebremoting.event.ScriptSessionListener</param-name>
>   <param-value>$myclassname</param-value>
> </init-param>
>
>
> DefaultContainer.addParameter adds an instance of the listener when
> the value is a string that is a class name.
>
> StartupUtils.resolveListenerImplementations calls
> container.getParameter, which returns value.toString() (by default
> $classname@$hexadecimal), which leads to a ClassNotFoundException.
> You should probably call getBean instead of getParameter in
> resolveListenerImplementations.
>
> The workaround is to implement toString() on the listener to return
> class.getName().
>
> - Peter
>
> On 4/03/16 1:42 AM, [hidden email] wrote:
> > As I said earlier, I would bet 2 is related to
> > https://directwebremoting.atlassian.net/browse/DWR-654.  We did not
> > have a timeout set on the heartbeat calls.  As a result, while your
> > server is down several of them back up, and multiple reverse ajax
> > connections are initiated when the server comes back up.
> >
> > Let me think about this some more.  For now, is it possible that
> > you could patch this temporarily by checking the number of empty
> > responses you get?  For example, under normal circumstances I
> > expect you are getting status updates from the server.  After the
> > server goes down and comes back up you are getting no status
> > information.  Can you kill the poll after N polls with no status
> > updates?
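The counting approach suggested above could look something like this (a sketch only: stopPolling is a hypothetical callback that would wrap dwr.engine.setActiveReverseAjax(false) on a real page, and where to count an "empty" poll is application-specific):

```javascript
// Sketch of the workaround: stop reverse ajax after maxEmpty consecutive
// polls that carry no status update. stopPolling is a hypothetical hook;
// on a real DWR page it would call dwr.engine.setActiveReverseAjax(false).
function makeEmptyPollTracker(maxEmpty, stopPolling) {
  let emptyCount = 0;
  return function onPoll(hadUpdate) {
    if (hadUpdate) {
      emptyCount = 0;                 // a real update resets the counter
    } else if (++emptyCount >= maxEmpty) {
      stopPolling();                  // server has nothing more to say
    }
  };
}
```

The application would call onPoll(true) whenever a status update arrives and onPoll(false) on each poll cycle that produces nothing.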
> >
> > On 2016-03-02 20:42, Peter Bryant wrote:
> >> The heartbeats are not my issue.
> >>
> >> The reverse ajax stuff is so that my server can send clients status
> >> updates on long running tasks.  After the outage those
> tasks no longer
> >> exist.  So there will be no updates.  The clients keep polling.  My
> >> server will never have anything interesting to say to them.
> >>
> >> I'm not able to easily see how I can ask them to stop.
> >>
> >> I've tried iterating over all the scriptsessions per
> >> ServerContextFactory.get().getAllScriptSessions() and sending a
> >> dwr.engine.setActiveReverseAjax(false)
> >>
> >> Issues:
> >>
> >> 1) I'd like a hook on the client and/or server side that lets me
> >> stop further polling when the server gets a poll for something it
> >> doesn't know about.
> >>
> >> The server side gets a POST .../call/plainpoll/ReverseAjax.dwr
> >> which sends a batchid (serial number), page, and scriptSessionId.
> >> After a restart, the server will see it has no such script session
> >> id.  (It also knows from the batchid that this should not be the
> >> first time it sees this session.)  At that point I'd like a
> >> notification on the client or server that lets me add
> >> application-specific logic to stop further polling where that is
> >> appropriate.
> >>
> >> I see the pollStatusHandler is used when 1) DWR goes offline 2) DWR
> >> comes back online 3) maxRetries has been reached.
> >>
> >> Maybe it also needs a 'server has changed' notification, sent when
> >> the app server is restarted, when a load balancer sends the request
> >> to a different host, or in any other situation where the server
> >> receives a poll for something it knows nothing about.
> >>
> >> Issue 2) Client is polling.  Server is killed for a bit and
> >> restarts.  Client polls more times/more frequently, e.g. I am
> >> seeing one IP sending 80 poll requests a second.
> >>
> >> Regards, Peter
> >>
> >> On 3/03/16 3:45 PM, David Marginian wrote:
> >>> It sounds like that could be related to:
> >>> https://directwebremoting.atlassian.net/browse/DWR-654
> >>>
> >>> Doesn't the process on the server restart when the server
> >>> restarts, and wouldn't you want clients that were getting updates
> >>> to continue to get them after the outage?  This is what the retry
> >>> logic was designed for.  If you simply want all polling to stop
> >>> when an error is encountered, you can kill the poll in your
> >>> pollStatusHandler, configure the retry logic to fail fast, etc.
> >>>
> >>> On 03/02/2016 02:18 PM, Peter Bryant wrote:
> >>>> Hi.  I am running 3.0.1 (from December?).
> >>>>
> >>>> I used dwr.engine.setMaxRetries(...);
> >>>>
> >>>> If the server goes away and doesn't come back before max
> >>>> retries, dwr stops polling.  That's fine.
> >>>>
> >>>> If the server comes back before the max retries is hit, the
> >>>> client keeps polling.  (Actually it looks like it multiplies the
> >>>> number or frequency of requests.  A separate issue?  Anything
> >>>> you can check on?)
> >>>>
> >>>> Also the process on the server side has gone away, so it won't
> >>>> have any more updates for the client.  So the client will keep
> >>>> polling (at what appears to be a higher rate).
> >>>>
> >>>> I guess I'm looking for a way for the server side to send out a
> >>>> request to stop polling for things that shouldn't be polling any
> >>>> more.
> >>>>
> >>>> - Peter
> >>>>