The Traffic Filters section in FunnelFlux lets you set basic filters to either:
Hide traffic from your report displays or
Hide from reporting displays and also redirect the visit to some URL (i.e. bounce visits)
In each case, visits/hits that match these filters will not show in your reports unless you enable the "Show Filtered Traffic" option in the reporting view -- so the visits are never lost, just hidden by default to keep reporting cleaner.
By default, FunnelFlux includes a filter that hides known bots and spiders -- things like Googlebot and other search engine crawlers.
You can also add your own rules based on attributes such as IP addresses, IP ranges, referrers, user-agents, and ISPs.
Any filters you add here are global, and in the case of Hide & Redirect, the redirect happens before users load any funnels -- so any conditions you have inside funnels are irrelevant; traffic bounced by one of these filters will never reach them.
Let's have a quick look at how to add a filter for each attribute type.
For IP addresses you just need to add one per line.
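For example (these are illustrative, documentation-reserved addresses, not ones you'd necessarily want to block), the input would look like:

```
203.0.113.5
198.51.100.24
192.0.2.17
```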
Here I have chosen Hide & Redirect -- this adds an extra box for a URL input. I can put any valid URL here, including a FunnelFlux link.
So if you want to be super savvy, you could bounce traffic to a special funnel that redirects users in a certain way. Then in your reporting, you could look at this funnel, toggle show filtered traffic, and analyse your filtered traffic.
Much like with IPs, except you can set ranges.
Enter each range as a pair of lines: the first line is the start of the range, the second is the end, e.g.
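A hypothetical example using a documentation-reserved block, filtering everything from 203.0.113.0 through 203.0.113.255:

```
203.0.113.0
203.0.113.255
```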
Hiding by referrer is best done based on your data.
In other words, get data, group by referrer in your statistics and figure out what referrers are worth hiding from stats.
Here, for example, I am hiding l.facebook.com, because from my stats I know this referrer comes from Facebook's automatic link checking -- which is basically a bot processing my ad URL, and it's not worth seeing in my statistics at all (if I create 100 ads, I'd get 100+ useless hits from this referrer).
User-agent is a complex string of text that describes the environment of the device requesting a URL -- a "header" that servers can see.
It is from user-agent that we resolve browser, browser version, device type, OS, etc.
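FunnelFlux's actual resolution logic is internal, but as a rough sketch of how browser detection from a user-agent string works, a simplified version in Python might look like this (the substring checks and their ordering are illustrative assumptions, not the real parser):

```python
# Simplified, illustrative browser detection from a user-agent string.
# Real parsers (including FunnelFlux's) are far more thorough.

def detect_browser(user_agent: str) -> str:
    ua = user_agent.lower()
    # Order matters: Chrome's UA also contains "safari",
    # and older IE UAs are identified by "msie" or "trident".
    if "edg" in ua:
        return "Edge"
    if "chrome" in ua:
        return "Chrome"
    if "safari" in ua:
        return "Safari"
    if "msie" in ua or "trident" in ua:
        return "Internet Explorer"
    if "firefox" in ua:
        return "Firefox"
    return "Unknown"

print(detect_browser(
    "mozilla/5.0 (x11; linux x86_64) applewebkit/537.36 "
    "(khtml, like gecko) chrome/54.0.2840.59 safari/537.36"
))  # Chrome
print(detect_browser(
    "mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)"
))  # Internet Explorer
```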
This is another filter you should use based mainly on your own data -- i.e. group by user-agent, find the ones you want to hide, then create a filter here.
Here is another Facebook example where I hide user-agents that I have observed to be purely from automated reviews by bots:
If you're interested, that list is below (note: it will likely become outdated over time):
mozilla/5.0 (x11; linux x86_64) applewebkit/537.36 (khtml, like gecko) chrome/54.0.2840.59 safari/537.36
mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0)
mozilla/4.0 (compatible; msie 8.0; windows nt 5.1; trident/4.0; .net clr 2.0.50727)
This one is pretty self-explanatory.
For ISPs, much like with user-agents and referrers, it is best to look at your data then add specific ISPs to hide.
Note that this filter uses "contains" as the operator, so if you put "amazon" on a line it will match all ISPs with "amazon" in the name. Use this with caution so you don't hide ISPs you don't actually mean to.
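To illustrate why caution is needed, a "contains" operator behaves like a substring check. The sketch below is my assumption of the matching logic (including case-insensitivity), not FunnelFlux's actual code:

```python
def isp_matches(filter_term: str, isp_name: str) -> bool:
    # "Contains" operator: assumed case-insensitive substring match.
    return filter_term.lower() in isp_name.lower()

# A single short term like "amazon" matches more than you might expect:
print(isp_matches("amazon", "Amazon Technologies Inc."))  # True
print(isp_matches("amazon", "Amazon.com, Inc."))          # True
print(isp_matches("amazon", "Comcast Cable"))             # False
```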
In general, you can just copy paste ISP names from your reports.
Here is another Facebook example where I am also hiding visitors based on ISP, these being ISPs that are also involved in automated (bot) reviews: