>>Ah so today we're here to talk about 411 a framework for managing security alerts ah
which we will be open sourcing after Defcon [cheering] so before we get started let's do
introductions. My name is Kai, oh Kai Zhong and I am a product security engineer at Etsy so I'm
responsible for um helping developers with writing secure code and maintaining some of the
um internal applications that we use on the security team like 411 and on occasion I've been
known to wear many hats like you see in that photo and uh after this presentation um I'll be
tweeting out links to the slides on my twitter so follow me please gotta get those followers
alright oh heh sorry I'm supposed to make a really really bad pun here um hopefully you
won't find our presentation to be unbearable yes you groaned >>Thanks Kai my name's Ken Lee
I'm a senior product security engineer at Etsy I'm glad to be back at Defcon I was here three
years ago for a presentation on content security policy and two important facts about me, one my
twitter handle is KennySan and two I really love funny cat gifs so I've managed to sneak one
into the slide deck >>Nice! >>For those that don't know this adorable cat is Maru so let me
go and start by explaining what Etsy is, Etsy is a marketplace for handmade and vintage goods
the security team at Etsy is responsible for keeping members' personal information private,
such as credit card details, their addresses, etcetera oh in addition the Etsy security team
has been successfully running our own bug bounty program for the past four years as well
[applause] I'm going to go into some more detail about what we're covering in today's
presentation. First we're going to start by talking a little bit about the history of our
transition to using ELK we're going to go delve into some of the problems that we encountered
during this transition process and we're going to talk more about our solution which we call
411 then we're going to dive into a how we at Etsy do alert management using 411 we're going
to show you some additional more involved examples and we're going to finish things off with
a non live demo I know I really wanted the live demo but I I never trust the demo gods to get
it right um first we're going to go over some terminology for some of you this must be old
news but we're going to try to get over this as quickly as possible. So for those that
don't know this is a log file, logs are typically interesting messages generated by a web server
that are stored in a log file. This is the ELK stack, the ELK stack consists of three different
technologies, Elasticsearch, Logstash, and Kibana and I'm going to quickly go over what
each of these different applications do. The first as represented by our friendly
mustachioed log over here is called Logstash. Logstash is our data processor and log shipper
tool, we primarily use it as a way to identify interesting fields that we would want to
perform searches on in the future. In addition we also use Logstash to ship logs into
Elasticsearch proper, what is Elasticsearch? Great question me! Elasticsearch is a
distributed real time search engine created by Elastic.co. It allows for storing
complex nested documents but in this case we primarily use Elasticsearch for storing log
files parsed by Logstash in addition Elasticsearch allows the generation of statistics of
your data so you can run interesting aggregations over the information that you have
stored in Elasticsearch which lends itself very well to analysis of the data that you
have. Finally the K in ELK stands for Kibana and that's the data visualization web
application front end for Elasticsearch. Kibana allows for log discovery and more
importantly debugging of problems in your application and in addition Kibana provides for
some interesting visualization options. Unfortunately this was the best stock image that I
could find of Kibana to show you what it does um you can do interesting pie charts, graphs,
etcetera, using Kibana as a front end.
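To make the division of labor concrete, here is a minimal sketch in Python of what that flow boils down to, using the plain requests library against Elasticsearch's REST API rather than any of the actual Etsy tooling; the index name and field names are hypothetical. Logstash parses a log line into fields and ships it as a JSON document, and the aggregations Kibana charts are just queries like the one at the bottom.

```python
import json
import requests

ES = "http://localhost:9200"   # assumed local Elasticsearch node

# A log line after Logstash has parsed it into fields (hypothetical schema).
doc = {
    "@timestamp": "2016-08-05T12:34:56Z",
    "type": "apache_access",
    "clientip": "203.0.113.7",
    "request": "/login",
    "response": 500,
}
# Logstash normally does this shipping step for us.
requests.post(f"{ES}/logstash-2016.08.05/_doc", json=doc)

# The kind of aggregation Kibana builds its charts from: a count of
# requests per HTTP response code over the last hour.
query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
    "aggs": {"by_status": {"terms": {"field": "response"}}},
}
resp = requests.post(f"{ES}/logstash-*/_search", json=query)
print(json.dumps(resp.json()["aggregations"], indent=2))
```

So now let's talk a little bit more about the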
history of how we transitioned into using ELK so Etsy switched to using the ELK stack back in
mid 2014 from Splunk and the work took about a year and throughout this process we both
learned a lot of good lessons from the migration process and we got a bunch of great tools
out of it including 411 but it wasn't a super easy road to go down we were aware of the fact
that we were going to run into issues when we started to transition to using ELK and we
had to deal with our fair share of really annoying performance impacting bugs with our ELK
cluster. In addition the security team was concerned about the usability of ELK
as a solution for being able to do some of our alerting and monitoring. So to give an
example of one of these bugs here we have two AnandTech articles, one's from September of
2014 and the other from April of 2015 that's a span of about six or so months basically this
article illustrates the discovery of uh a bug with Samsung's line of solid state
drives and the acknowledged fix coming out about six months plus later so
unfortunately for us our ELK cluster used these SSDs and so
we were affected by this performance bug for more than six months in addition this is
just a small snippet from an email we had a small issue with a kernel level bug affecting how
it was handling NFS mounts this caused a lot of instability with our ELK cluster and
unfortunately some additional outage uh downtime as well. So to say the least you know these
are just two example bugs that we encountered at times it felt like we were riding the
struggle bus with regards to all of the bugs and issues that we had to deal with with
ELK but that aside, Kai is now going to talk to you about um some of the actual problems, not
just bugs that we encountered, when migrating to ELK >>Thank you Ken, so um like most
security organizations alerting is a major part of how the security team at Etsy knows what
is going on on the site um and some mechanisms that we use for alerting are um Splunk, or we used
to use Splunk, StatsD and Graphite and unfortunately um when we first started this
migration we were making use of Splunk saved searches to automatically
schedule queries on some sort of periodic interval and Elasticsearch didn't offer like
equivalent functionality at that time and additionally, Elasticsearch also didn't offer
some sort of web UI for managing those um queries that we were writing which is pretty useful
when say it's like the middle of the weekend and you're getting spammed with alerts and you need
to make a change to one of the queries but doing so would require a code push and you
don't want to like break something, with some sort of web UI where everything is handled
for you you could just go in there, change the query and then update it and you're good to go.
Now the second problem was that um we were just not familiar with the new query language that
we were um faced with um our old queries were built using SPL which is the language that
Splunk uses and um so some of the functionality that we needed in order to write our
queries simply wasn't available um in Elasticsearch's Lucene shorthand. Additionally there
were some things that weren't obvious coming from um Splunk like especially with how
Elasticsearch indexes documents um it has an effect on like whether or not and how you can
query um the actual fields that you are searching on. So this came as a surprise to us at
certain points and because of these issues the road to ELK integration was a long one in
order to successfully um complete the migration we essentially needed three things,
firstly we needed a query language that would allow us to build complex queries preferably
without having to write any code, uh we also needed a mechanism to actually run these
queries and like email us with those results and finally we would like to have all of this
ready before we turned off Splunk because we'd be dark otherwise and that would be
really bad. Alright so as it turns out the first half of the solution was provided to us by
um the data engineering team at Etsy and that solution is called ESQuery and what it is is it's a
superset of the standard Lucene shorthand and um it's syntactically pretty similar to
SPL so it's got like a bunch of pipes everywhere so you can take data from the
first stage and pass it to the second. I'll provide an example in a bit but more
importantly it supports all of the functionality that we need. So here's a quick summary of all
of the syntax um when you define an Elasticsearch query you do it via this large JSON DSL and
we provided the ability to like inline all of these directly into the query so you can see it
over here you can specify say like size or how you're sorting the results that come back or
just what fields are coming back. Additionally you can do an emulated join so you can take results
from one query and then like insert them into a subsequent query and all the aggregation
functionality that is available in Elasticsearch is also available in ESQuery but
inline. And finally you can also um define variables within ESQuery um and you configure
them in 411 and then have those variables get
substituted into your queries at run time so like you can have a list of values that you can
update independently of these queries so here's an example SPL query. Um what this is doing is
it's finding all um failed login attempts and then giving you the top ten IP addresses that made
attempts this is the same query but um when using Elasticsearch's DSL and finally this
is the same query but when using ESQuery so you can see it's pretty similar to how you would
write it using SPL and way shorter as well and the two are actually similar enough that um
someone at Etsy was able to write a simple query translator which we made use of during our
migration so what we did was we would just plug it in, um test it out, and make changes if
necessary and then stick them into 411.
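The slides with the three queries aren't reproduced in this transcript, but the following sketch gives a feel for the verbosity gap being described: it's the raw Elasticsearch JSON DSL, written out as a Python dict, for roughly that query, all failed login attempts in a window plus the top ten source IPs. The log type and field names (failed_login, clientip) are made up for illustration, and the ESQuery line in the comment is pseudocode rather than the exact syntax.

```python
# The long way: raw Elasticsearch query DSL for "all failed login attempts
# in the last hour, with the top ten IP addresses by count". The log type
# and field names are made up for illustration.
failed_login_top_ips = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"type": "failed_login"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    "aggs": {
        "top_ips": {"terms": {"field": "clientip", "size": 10}},
    },
}

# An ESQuery-style pipeline collapses the same idea into roughly one line,
# something along the lines of (pseudocode, not the exact syntax):
#   type:failed_login | terms clientip size:10
```

Speaking of which next up let's talk about what 411 is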
so 411 is an alert management interface or application and what it does is it allows you to
write queries that get automatically executed on some sort of schedule then you can
configure it to email you alerts whenever those data sources that you're
querying return any results and additionally you can manage the alerts that are generated
through the web interface. Before we dive into 411 let's um talk briefly about how
scheduling works within um the system. So whenever a search job is run it executes um a query
against a data source and then generates an alert for every single result that comes back
you can then configure a series of filters on those alerts to reduce or modify the
stream somehow and then finally um specify a list of targets that you can send the
remaining alerts to. So an example of one target that is pretty neat is the Jira target
which allows you to generate a ticket for every single alert that goes through
the pipeline. Alright, additionally if we um take a step back what happens is
there's a scheduler that runs periodically and generates those search jobs which then get fed
off to a bunch of workers that actually execute them. And now we're ready to get into 411.
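As a rough illustration of that scheduler, worker, filter, target shape, here is a heavily simplified sketch; it is not 411's actual implementation, and every name in it is a stand-in.

```python
import sched
import time

def run_search(search):
    # A worker executes one search job: run the query against the data source,
    # turn every result into an alert, filter, then hand off to the targets.
    alerts = [{"search": search["name"], "content": hit} for hit in search["query_fn"]()]
    for flt in search["filters"]:            # e.g. dedupe, throttle, regex
        alerts = flt(alerts)
    for target in search["targets"]:         # e.g. email, the Jira target
        target(alerts)

def schedule(scheduler, search):
    # The scheduler periodically re-enqueues each search as a new job.
    run_search(search)
    scheduler.enter(search["frequency"], 1, schedule, (scheduler, search))

# Example wiring with stand-in functions; all names here are hypothetical.
search = {
    "name": "login 500s",
    "frequency": 60,                                  # run every minute
    "query_fn": lambda: [],                           # would query Elasticsearch
    "filters": [lambda alerts: alerts],
    "targets": [lambda alerts: print(len(alerts), "alerts")],
}
scheduler = sched.scheduler(time.time, time.sleep)
schedule(scheduler, search)
scheduler.run()
```

So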
the first thing you'll see when you log on is the dashboard which is this thing over here
it's pretty simple but you see there's some um useful information about the current
status of 411 there's a breakdown of alerts that are currently active as well as a
histogram of just like alerts that have come in over the last few days. Alright moving on um
one of the most important things you'll want to do in 411 is manage the queries that you have
scheduled to execute and you do that via the search management page which you can
see here. In the center you've got all the searches listed out with like some categorization
information and on the right you can see the health of that particular search, whether
or not it's been running correctly, and whether or not it's been able to execute. Now
if you want to modify an individual search you'll get taken to this page over here
which has a whole like slew of options that you can configure um there's a title which is not
too exciting but more importantly there are all of these fields so let's go through
all of these briefly. At the top here is the query which is quite simply the query that you're
sending off to whatever data source in this case this is a Logstash source so we're sending
this to an Elasticsearch cluster with a Logstash index um you can also
configure a results type so whether you want the actual contents of the log
lines that match the query or whether you just want like a simple count or even an
indication that there's like no results and finally you can apply thresholds
on like how many results you want to get back next up you can also provide a
description that um gets included whenever an alert gets sent to you so you should
preferably put some information that allows whoever's assigned to the alert to
resolve it and there are a few categorization options at the bottom as well for the alerts
that are generated. Much better. Alright next up is the frequency which is how often you want to
run this search and the time range which is how far back of a time window you want
to search most of the time you're gonna want both of these to be the same value but if you
want say like better granularity you might want to specify a frequency of one minute and a
time range of ten minutes and finally we've got the status button which lets you toggle this
search. Cool that's all for the basic tab next up let's talk about uh notifications. So in
411 you can configure email notifications whenever um it generates any
alerts and those notifications can be sent out as soon as the alerts are generated or included
in an hourly or daily rollup. You also have to assign um these alerts
to an assignee which is the person or the group of people that are responsible for
actually resolving and taking a look at those alerts and finally the owner field is just um for
bookkeeping so you can keep track of who is responsible for maintaining that particular
search. And here's the AppSec group that we're currently using here you see it's got a list of
all the users that are currently on the security AppSec team and uh whenever 411 generates an
alert for this particular um search it'll email all of these people. Alright moving on
to the final tab, here we've got some more advanced functionality that's less
commonly used like auto close which allows you to automatically close alerts that
haven't seen any activity after a while so they're probably stale and we've also got um the
actual configuration for filters and targets here as well so again recall that filters
allow you to reduce the list of alerts that get passed through um 411 and eventually
get generated and here is a list of filters that are currently available so I'll just highlight
a few of them. Dedupe allows you to just like dedupe alerts that are the same and you can
throttle um the alerts that are generated to like some threshold for the purposes of this
presentation let's talk about the regular expression one because that's relatively
complicated uh you can configure this particular filter to um have some sort of key like what
keys you want to match on within the alert as well as a regular expression to match on and then
you can specify whether or not you want matching alerts to be included or excluded from the
final list of alerts.
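As a rough sketch of what the regular expression filter is doing conceptually (the real thing is configured through the UI, and the alert shape here is assumed):

```python
import re

def regex_filter(alerts, key, pattern, include=True):
    # Keep (include=True) or drop (include=False) any alert whose value for
    # the given key matches the regular expression. The alert shape is assumed.
    compiled = re.compile(pattern)
    keep = []
    for alert in alerts:
        matched = compiled.search(str(alert["content"].get(key, ""))) is not None
        if matched == include:
            keep.append(alert)
    return keep

# Example: exclude alerts whose user agent looks like an internal scanner.
alerts = [
    {"content": {"agent": "internal-scanner/1.0", "clientip": "10.0.0.5"}},
    {"content": {"agent": "Mozilla/5.0", "clientip": "203.0.113.7"}},
]
print(regex_filter(alerts, key="agent", pattern=r"^internal-scanner", include=False))
```

Similarly on the other side we've got the list of targets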
that you can configure and we're going to cover the Jira target which allows you to specify a
Jira instance, a project, a type, and an assignee and then any alerts that make it to
this target get turned into Jira tickets so that's useful if you want to use Jira as your alert
management workflow cool so that's about it as far as managing searches go next up
we're going to get into actually managing the alerts that are generated by 411. So here it is
the main alert management interface you'll notice at the top there's a search bar for
filtering the list of alerts that are visible and 411 actually indexes all of its
alerts into Elasticsearch so all of your standard like Lucene shorthand queries are valid here um
in the center you'll see all of the actual alerts that matched the current filter and you can
select um individual alerts and apply actions to them using the um action bar at the
bottom. Now if you want to drill down into an individual alert you can so this is the view for
viewing just like a single alert and you can see at the center there's all of the information
that was available before but also a change log for viewing all actions that have been taken
on this one alert. Additionally you'll see there is the same action bar that's
available at the bottom and let's say we were to investigate
this alert like we took a look at the IP address and then we've determined that it's just a
scanner so nothing to worry about we can then hit resolve on that action bar which will pop
up this little dialog where we can select a resolution status in this case not an issue and a
description of exactly what actions we took to resolve this alert and then once you hit
resolve there you'll see the change log has been updated with this um additional action. 411
also offers a um alert feed so what you can do is just keep this open and whatever new
alerts come in um it'll just pop up on this list and you can also leave it running in the
background because it's got desktop notifications so you'll see that nice little Chrome pop
up uh whenever there are new alerts cool alright next up >>Thanks Kai I'm gonna talk to
us talk to you more about how we do alert management at Etsy using 411. So here we have a
sample email generated by 411 I'm going to go into some more depth and explain to you what's
going on so the subject line of this email says login service five hundreds ah the description
says login five hundreds investigate for people that aren't very familiar with it log
in is just basically a process to essentially log you into a website, five hundreds is
basically a message that says oh something bad is happening and usually this is pretty bad
to the extent where you would want to create an alert for it and be notified about it and we
can see from the time range that this alert has taken place over the past five minutes and we
have buttons on the bottom to both view the alert in 411 as well as to be able to view this
link in Kibana as well we also get a short snippet including the PHP error that was thrown
and as you can see from this sort of short email snippet people are sort of taking action
based on this alert. But let's take a step back a little bit and think more about what we do
to actually create high quality alerts and at Etsy the secret is we create alerts that
have a high degree of sensitivity. What do I mean when I say high sensitivity well
let's say that we have an alert that fires one hundred times over the course of a day and out
of those hundred times that alert correctly predicts an event actually happening ninety
times so what that means is out of a hundred times that alert only improperly fires ten times
so there's a one in ten chance that that alert is misfiring so ninety percent of the time that
alert is responding correctly to an event so we say that that particular alert has a
sensitivity of ninety percent that's a pretty high sensitivity that we would you know find to
be useful.
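Spelled out, the calculation is just the ratio of correct firings to total firings (in textbook terms this ratio is usually called precision, but the idea is the same):

```python
total_firings = 100      # the alert fired one hundred times in a day
correct_firings = 90     # ninety of those corresponded to a real event

sensitivity = correct_firings / total_firings   # 0.90, i.e. ninety percent
misfire_rate = 1 - sensitivity                  # 0.10, a one in ten chance of a misfire
print(f"sensitivity={sensitivity:.0%}, misfire rate={misfire_rate:.0%}")
```

For alerts that aren't as important we still create them as searches and alerts in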
411 but what we do is we end up not generating email notifications out of them and
I'll go into more detail as to why in just a moment for more important alerts we still
generate alerts off of them but what we do is we set them up as um rollups so every hour or
every day we have this alert go off and it'll email us the results and one reason why we
really like doing this is because it gives us the option of being able to monitor a
particular search over a period of time for anomalies. So one of the reasons why we take this
sort of tiered approach to alerting is because attackers hitting your website will often
generate a lot of noise and in the process of doing so they'll set off a bunch of different
alerts that you have set up. So one thing that we often have to answer when we see an alert on
our phone at three in the morning is is this something that I really need to respond to
at three in the morning? Do I Can I Can I just continue sleeping? Do I have to you know
can I just answer this tomorrow or even after the weekend? Well one way in which we make that
determination is by seeing and looking at the other alerts that have gone off in the same period
of time so we look at the high alerts the low alerts the medium alerts that have gone off over
this period of time a good example of this would be let's say there is a high alert for a very
high number of failed login attempts that has gone off recently
well maybe if we also have a lower alert that indicates that we have a low quality uh series
of bots trying to scan us at the same time maybe that's indicative that actually this
isn't like a really concentrated attack that we need to worry about so we can go back to
sleep. So in addition to creating alerts one thing that we also have to be vigilant
about is maintaining our alerts sometimes we create alerts that overfit on a particular attacker
and as a result of that the alerts become less useful over time one way in which this
happens is the alert simply generates too much noise sometimes we've created this
search and it turns out the IP address for example might be shared by some legitimate
users as well um and that can create a bunch of false positives so in those cases we
sometimes have to fine-tune our alerts and one way in which we do that is we look at other
fields so another example is sometimes say an attacker might accidentally be using a static
but very easily identifiable user agent when attacking our website so we
can create a search off of that to easily identify that attacker but perhaps they become a little
savvier and realize that they're making this terrible mistake in the first place and they make
an effort to randomize the user agent and by doing this what they essentially
do is they're forcing us to have to use other fields to identify the attacker, maybe looking at
what data center it's coming from or other IP addresses that they're coming from for
example. So let's take a step back we've sort of sold 411 as a tool for security teams but it's
also a very useful tool for the average developer as well and one way in
which 411 can be useful for a developer is creating alerts based off of potential error
conditions in your code so a good example of this would be when you want to know potential
exception conditions say for example code wrapped in a try-catch statement for example you
generally don't want your application to be running into too many exceptions so generally
by entering in a log line and creating an alert based off that log line you'll get a
notification when something bad happens in your application.
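A minimal sketch of that pattern, logging from an exception handler so that a search keyed on the message can alert when the exception path starts getting exercised; the function and message here are made up:

```python
import logging

logging.basicConfig()
log = logging.getLogger("payments")

def submit_to_processor(order):
    # Stand-in for the real call; raises so the exception path runs.
    raise RuntimeError("processor timeout")

def charge_card(order):
    try:
        submit_to_processor(order)
    except Exception:
        # A 411 search for this phrase can email the team whenever this
        # exception path starts being hit more often than expected.
        log.exception("card charge failed for order %s", order)
        raise
```

Another condition under which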
you'd want to create an alert is when you're getting a large amount of unwanted traffic to an
endpoint that you uh consider sensitive. A good example of this would be uh an attacker for
example trying to hit a gift card redemption endpoint or a credit card number
re-entry endpoint you know those endpoints are probably already rate limited in the first place
so it's only natural to add basically an additional alert on top of that just so you know
that someone's trying to intentionally brute force this particular endpoint and finally
the last instance under which you might want to consider creating an alert is when you're
deprecating old code. So at Etsy we have what's called a feature flag system that allows us to
very easily flag on and off particular bits of code but sometimes we need to evaluate
how often a particular code branch is being exercised before we can remove it entirely from the
code base one way in which we do that is we sometimes just like to add a log line and create an
alert with a rollup to see how many times this particular code branch has been
exercised throughout the course of a day or even a week and by doing that once we have
confidence in knowing yes this code is not really being used that often we can go ahead and
actually remove the code in question. So at Etsy we actually have a couple different
instances of 411 set up and I'll explain what they are. Our main instance that the application
security and risk engineering teams use is called Sec411 and this instance is primarily
used for monitoring issues that happen on Etsy dot com itself. The network security team has
its own instance of 411 called appropriately netsec411 and this instance is set up primarily to
aid in monitoring laptops and our servers and finally for those compliance loving folks we
have an instance of 411 set up called Sox411 which is primarily uh used for SOX-related
compliance issues. Now I'm going to go into some more examples of uh some functionality that we
have present in 411 that we're going to be making available to you when we open source the tool
a lot of this additional functionality was made at the request of
developers at Etsy and we found it useful enough to include in the open source version of 411
as well. So Kai mentioned earlier that 411 has the ability to incorporate lists into
queries here we have a search that looks for suspicious Duo activity coming
from known TOR exit nodes so this query looks fairly straightforward but let's take a
deeper look so we're looking at logs of the type duo login and we're looking
for the IP address that matches this TOR exits variable well if we take a look at what the list
functionality is we can see that TOR exits is defined as a URL that just enumerates a list of
IP addresses so what 411 is actually doing behind the scenes is it's taking this TOR exits
variable and expanding the query out to include all of those IP addresses in that TOR
exit node list so essentially when you get any hit in a log line that contains
a TOR exit node IP address it matches with the search and generates an alert.
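Conceptually the expansion looks something like the following sketch: fetch the list from its configured URL and fold the addresses into the Elasticsearch query. The list URL and field names are placeholders, not 411's actual code.

```python
import requests

def fetch_list(url):
    # A 411-style list is just a URL that returns one value per line.
    return [line.strip() for line in requests.get(url).text.splitlines() if line.strip()]

tor_exits = fetch_list("https://lists.example.com/tor-exit-nodes.txt")  # placeholder URL

# Expand the variable into the query: any Duo login coming from an IP on
# the list should match the search and turn into an alert.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"type": "duo_login"}},       # hypothetical log type
                {"terms": {"ip": tor_exits}},          # hypothetical field name
            ]
        }
    }
}
resp = requests.post("http://localhost:9200/logstash-*/_search", json=query)
print(resp.json()["hits"]["total"])
```

Now I'm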
gonna talk more about some of the additional functionality that we offer beyond just the
ELK stack with 411. We offer a searcher for Graphite which is basically a way of storing and
viewing time series data this is what Graphite's front end interface looks like as you can
see it's a very nice way of easily generating graphs, this particular graph shows an
overlay of potential cross site scripting over potential scanners um it's just a really
nice way of being able to determine when there are anomalies happening
and so the Graphite searcher gives you a really easy way to do simple threshold style
alerting uh and because the Graphite searcher basically directly sends the query to
Graphite itself all of Graphite's data transform functions are available for you
to be able to use for the searcher so as an example of some of the things you can do
you can essentially write a query to say please fire off an alert when you see a high rate
of change for failed logins.
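As a sketch of that kind of threshold alerting, the following hits Graphite's render API for the per-interval change of a failed-logins metric and flags it when it spikes; the host, metric name and threshold are invented, while derivative and format=json are standard Graphite render features.

```python
import requests

GRAPHITE = "http://graphite.example.com"     # placeholder host
TARGET = "derivative(stats.logins.failed)"   # per-interval change; metric name is made up
THRESHOLD = 500                              # invented threshold

resp = requests.get(f"{GRAPHITE}/render",
                    params={"target": TARGET, "from": "-10min", "format": "json"})
datapoints = resp.json()[0]["datapoints"]    # list of [value, timestamp] pairs
spikes = [value for value, _ in datapoints if value is not None and value > THRESHOLD]

if spikes:
    print(f"failed logins climbing at {max(spikes)} per interval, generate an alert")
```

Now I'm gonna talk a little bit about the HTTPS searcher that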
we're also making available. This is a fairly straightforward searcher what it does is you
provide an HTTP endpoint and if you receive an unexpected response code it creates an
alert based off of that. It's very useful for web services when you want to know if a
particular service is for example down or even up and for those in the devops community
this is very similar in functionality to the tool called Nagios.
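The whole idea fits in a few lines; the URL and expected status here are placeholders:

```python
import requests

URL = "https://www.example.com/health"   # placeholder endpoint
EXPECTED = 200

try:
    status = requests.get(URL, timeout=10).status_code
except requests.RequestException:
    status = None                         # no response at all

if status != EXPECTED:
    print(f"{URL} returned {status}, expected {EXPECTED}, generate an alert")
```

Now I'm gonna go to the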
non live demo portion let's hope this works [laughter] okay I'll be narrating this so for this
demo we set up a very simple WordPress blog instance called Demo All The Things and we have
a plugin installed called WP Audit Log which logs everything that happens in this
WordPress instance. In addition we are forwarding the logs to our own ELK stack so that we can
index the log files um here I'm just showing off this one nice blog post that we have uh red is
apparently the best color. Now we're going into Kibana proper to actually look at some of the
log files from this wordpress instance and we can see here there's an interesting log line
user deactivated a wordpress plugin okay that's kind of interesting maybe we can make an
alert off of that particular phrase that we can use for the future. So what we're going to
go and do now is we're going to go into 411 proper we're going to go into the searches tab
we're going to go and hit the create button and create a new searcher of the Logstash type
and we're basically just going to create a new search to look for this particular message
we're going to call this search disabled wordpress plugin and the query is going to look for
anything in the message field that contains the phrase user deactivated a wordpress plugin
and we're going to provide a little description in the search to let others that use 411 know
what this search is about in case they have to deal with an alert generated by it in the
future. We're going to look back in the past fifteen minutes and we're gonna test this alert and
we can see here that 411 has successfully grabbed data from um Logstash so we're going
to go ahead and create the search and to actually generate a real alert we're going to go
ahead and hit the execute button which will not just test the alert it will
actually create a real alert for us in the alert page we can see here we get the same results
back that we just got from hitting the test button so now we're going to go into alerts
we're going to click on view to take a look at the particular alert that was just
generated and we can see here in the plugin file information that the
Duo WordPress plugin was disabled well that's not good so now that we've gotten the
relevant information from this particular plugin we're going to go ahead and remediate this
issue we're gonna go into the WordPress back end we're going to go into the plugins page oh
and what do you know? The Duo two-factor auth plugin is disabled so we're gonna go ahead
and re-enable it and now that we've taken care of that issue we're gonna go ahead and hit
resolve and we're gonna just say that we've taken action to re-enable this plugin and we've
taken care of the alert by doing that. That concludes the live demo, not live demo [applause].
>>Cool and that also happens to conclude the presentation as well um once again 411's gonna be
open sourced after uh Defcon and we will take questions now um there's a mic over there and
over there so if you've got a question please line up [movement in the room] >>if
you're leaving you have to leave out these doors in the back >>when deciding to move away
from Splunk um how did you guys scale ELK versus going with Splunk like so ELK has a problem
when it gets really big it gets really expensive so was it a cost decision moving from
Splunk? >>ah the the question was why did we switch from Splunk um it was basically a
decision made by our operations team >>Okay, one last question, what are you guys using as your
send mail function? Are you guys using like Mailchimp? Um we've just got everything set up
correctly already so it's whatever um you provide to PHP >>The question was what do we
use to send mail in 411? >>So um yeah I have a question so you're open sourcing 411 after this
talk or that's the first part and the second part is do you have an a is this built on a AWS
architecture such as using a simple email service is it using elasticsearch what is it using
as far as your infrastructure that you can talk about? >>Um we're going to be open sourcing
this after Defcon and as far as Gmail um sorry what was the second question, email right?
>>No is it AWS architecture, so do you have an AWS architecture to go with it? >>Uh no it's just
um whatever email like >>No no no I meant in general the entire because like elasticsearch are
you using like Lambda functions or is it all pretty much like uh uh internal to itself instances
as far as >>Everything's inside like our data centers >>Okay got it thanks >>questions? >>Hey um
I have a question about the configuration you showed us, the beautiful UI but how is the
configuration actually stored and uh yes there is a change log on individual pages but would it
be easy to version control the configuration somehow? >>so the question was about change log
and version controlling of alerts uh >>There is no version controlling of alerts but there
is a change log of all of the actions that have been taken on the alert so could you also
speak louder because I think the mic isn't that great. >>oh okay So the initial question was how
is the configuration stored? Is it like is it stored in some text format that you can review
is it XML, can we version control it? >>All of it's stored in MySQL so we're using MySQL as
a database. >>Hello Hello Hey uh so at this point you guys are probably definitely aware of
Watcher, Elasticsearch's own alerting service um what's the motivation versus using their
own uh plugin built in straight to the you know cluster? >>So uh at the time when we started um
working on this I don't think Watcher existed yet >>Yeah it's super new >>So that's why we
ended up writing this >>Right um so is there any point to using it now as opposed to just
running the plugin? I don't want to be like that guy I'm just >>Um I don't know you're kind of
putting me on the spot uh there's also so it's not just Elasticsearch like you can also
plug in other data sources into 411 for like querying those data sources >>Okay, thank you. >>Hi
um my questi- I have like two questions one of them is what was your motivation to move away
from Splunk and build your own
uh >>So that was a decision made by our Sys Ops team. >>Okay >>So I didn't really have any like
much input on that >>Uh but any security concerns they had or? Was it I mean did they have any
security concerns at all or yeah >>I don't think so I think there at one point like uh the
scripting functionality in ELK was enabled by default and there were some like serious security
issues with that so that's as far as I can remember >>Okay and um just one last question um
does ELK also help like you know doing log analysis across multiple servers and instances?
>>Uh >>Or is it like dedicated to just like one group of >>Yeah you can setup multiple instances
and have them like connect to the same database and that would just work >>Oh okay thanks
>>Okay >>Uh are y'all open sourcing that ESQuery as well? Because Query DSL sucks >>Uh
yeah it's built in >>Oh it's already up >>Huh? >>Oh it's already up? Oh it's built in?
Okay >>Mmhmm >>My question's on uh Jira integration in your demo you showed that you resolved uh
the issue with the user turning off the feature in WordPress does that end up um closing a
jira ticket? >>Um no it doesn't so uh the Jira like target is pretty much separate you just
send that data off to Jira and then like 411 forgets about it >>Okay thank you >>Mmhmm >>K so
my question is a little bit two fold >>Okay >>Uh we saw a lot of web UI about this but uh there
wasn't any real uh uh focus on any API around it so uh like consider the use case where
there might be something where uh the same type of alert happens frequently but self
resolves uh would it have the possibility to either escalate the same type of alert due to
its frequency or in contrast if it somehow self resolves all of the history of those alerts get
resolved as well? >>Um that's not currently built in but that's because like it hasn't
been asked for yet so um like once this is open sourced you could create an issue and then
we could consider it >>Okay thank you >>Cool, guess that's it [applause]