
#1 2020-09-12 19:36:06

TristanFix
Member
From: Denmark, Kobenhavn K
Registered: 2020-09-12
Posts: 1

Here’s a quote: Much has been achieved in the field of AI

Logic

You are currently browsing the archive for the Logic category.

Hey, I just enrolled in a course on general game playing. General game playing programs try to play almost any game after being given the rules. There is a yearly competition of general game playing programs. If you join the course, send me an email so that we can exchange ideas or notes. (My email address is hundal followed by “hh” at yahoo.com.)

Markov Logic Networks Tutorial
January 6, 2013

Garvesh Raskutti has some nice slides on Probabilistic Graphical Models and Markov Logic Networks (Richardson & Domingos 2006).

Markov Logic Networks encode first-order predicate logic into a Markov Random Field.

The resulting networks can be quite large: a statement like “for all x, y, and z, if x is y’s parent and z is x’s parent, then z is y’s grandparent” requires $2n^2$ nodes in the graph, where $n$ is the number of people considered (one node for each grounding of parent and one for each grounding of grandparent).
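To make that count concrete, here is a minimal Python sketch (the function name and the toy constants are mine, not from any MLN implementation) that enumerates the ground atoms this formula needs:

```python
from itertools import product

def ground_atoms(people):
    """Enumerate the ground atoms needed by the formula
    parent(x, y) & parent(z, x) => grandparent(z, y)
    over a finite set of constants."""
    atoms = set()
    for x, y in product(people, repeat=2):
        atoms.add(("parent", x, y))       # n**2 groundings of parent
        atoms.add(("grandparent", x, y))  # n**2 groundings of grandparent
    return atoms

people = [f"p{i}" for i in range(10)]     # n = 10 people
print(len(ground_atoms(people)))          # 2 * 10**2 = 200 ground atoms
```

Each ground atom becomes one node in the Markov Random Field, so the network grows quadratically in the number of constants even for this tiny knowledge base.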

The resulting networks are frequently solved using Gibbs sampling.
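For a feel of how that works, here is a minimal Gibbs sampler over a ground network, with each clause stored as a weighted disjunction of literals (the representation and the toy clauses are my own simplification, a sketch rather than how Alchemy or any production MLN system is built):

```python
import math
import random

def clause_sat(literals, state):
    """A clause is a list of (atom_index, sign) literals; it is
    satisfied when at least one literal matches the state."""
    return any(state[i] == sign for i, sign in literals)

def total_weight(clauses, state):
    return sum(w for w, lits in clauses if clause_sat(lits, state))

def gibbs(clauses, n_atoms, n_sweeps, seed=0):
    """Estimate P(atom = true) marginals by Gibbs sampling.  For
    brevity every flip rescans all clauses; a real sampler would
    only look at the flipped atom's Markov blanket."""
    rng = random.Random(seed)
    state = [rng.random() < 0.5 for _ in range(n_atoms)]
    counts = [0] * n_atoms
    for _ in range(n_sweeps):
        for i in range(n_atoms):
            state[i] = True
            w_true = total_weight(clauses, state)
            state[i] = False
            w_false = total_weight(clauses, state)
            p_true = 1.0 / (1.0 + math.exp(w_false - w_true))
            state[i] = rng.random() < p_true
        for i in range(n_atoms):
            counts[i] += state[i]
    return [c / n_sweeps for c in counts]

# Toy network: a weight-2 clause preferring atom 0 true, and a
# weight-1 clause encoding "atom 0 implies atom 1" (not-0 or 1).
clauses = [(2.0, [(0, True)]),
           (1.0, [(0, False), (1, True)])]
print(gibbs(clauses, n_atoms=2, n_sweeps=5000))
```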

For even more information, Pedro Domingos has put an entire course online at www.cs.washington.edu/homes/pedrod/803.

Lifted Inference

December 24, 2012

Lifted Inference uses the rules of first-order predicate logic to improve the speed of the standard Markov Random Field algorithms applied to Markov Logic Networks.
I wish I had been in Barcelona, Spain in July last year for IJCAI-11, because they had a cool tutorial on Lifted Inference.
Here’s a quote: “Much has been achieved in the field of AI, yet much remains to be done if we are to reach the goals we all imagine. One of the key challenges with moving ahead is closing the gap between logical and statistical AI. Recent years have seen an explosion of successes in combining probability and (subsets of) first-order logic, programming languages, and databases in several subfields of AI: Reasoning, Learning, Knowledge Representation, Planning, Databases, NLP, Robotics, Vision, etc. Nowadays, we can learn probabilistic relational models automatically from millions of inter-related objects. We can generate optimal plans and learn to act optimally in uncertain environments involving millions of objects and relations among them. Exploiting shared factors can speed up message-passing algorithms not only for relational inference but also for classical propositional inference, such as solving SAT problems. We can even perform exact lifted probabilistic inference, avoiding explicit state enumeration by manipulating first-order state representations directly.”
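That last point, avoiding explicit state enumeration, can be illustrated with a toy symmetric model (my own example, not one from the tutorial). If every ground atom is exchangeable, states with the same number of true atoms have the same weight, so the partition function can be computed by counting rather than enumeration:

```python
import itertools
import math

W, U = 0.5, -0.1   # made-up weights: per true atom, and per true pair

def partition_brute_force(n):
    """Sum exp(W * #true + U * #true_pairs) over all 2**n states."""
    return sum(math.exp(W * sum(s) + U * sum(s) * (sum(s) - 1) / 2)
               for s in itertools.product([0, 1], repeat=n))

def partition_lifted(n):
    """All C(n, k) states with k true atoms share one weight, so
    sum over k instead: O(n) work instead of O(2**n)."""
    return sum(math.comb(n, k) * math.exp(W * k + U * k * (k - 1) / 2)
               for k in range(n + 1))

print(partition_brute_force(12))  # enumerates 4096 states
print(partition_lifted(12))       # same number from just 13 terms
```

Real lifted algorithms generalize this counting trick to symmetries discovered automatically in the first-order model.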

In the related paper “Lifted Inference Seen from the Other Side: The Tractable Features”, Jha, Gogate, Meliou, and Suciu (2010) reverse this notion.
Here’s the abstract: “Lifted inference algorithms for representations that combine first-order logic and graphical models have been the focus of much recent research. All lifted algorithms developed to date are based on the same underlying idea: take a standard probabilistic inference algorithm (e.g., variable elimination, belief propagation, etc.) and improve its efficiency by exploiting repeated structure in the first-order model. In this paper, we propose an approach from the other side in that we use techniques from logic for probabilistic inference. In particular, we define a set of rules that look only at the logical representation to identify models for which exact efficient inference is possible. Our rules yield new tractable classes that could not be solved efficiently by any of the existing techniques.”
What happens when you combine Relational Databases, Logic, and Machine Learning?
December 22, 2012

Answer: Statistical Relational Learning.

Maybe I can get the book for Christmas.
Euclidean Geometry is Decidable

September 22, 2012

The other day, Carl mentioned that Euclidean Geometry was decidable.

I thought that was impossible because I thought it would have an isomorphic copy of Peano arithmetic, which is not decidable.
Later he pointed me toward Tarski’s axioms.
Here’s a quote from the Wikipedia page: “This fact allowed Tarski to prove that Euclidean geometry is decidable: there exists an algorithm which can determine the truth or falsity of any sentence.” I found the whole article to be pretty cool because I had never really dug into geometry as a first-order predicate calculus.
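The proof goes through quantifier elimination: elementary geometry translates, via coordinates, into the first-order theory of real closed fields, where every quantifier can be eliminated. A classic worked instance of one elimination step (my example, not from the Wikipedia article) is

$$\exists x\,(ax^2+bx+c=0)\;\Longleftrightarrow\;(a\neq 0 \wedge b^2-4ac\ge 0)\;\vee\;(a=0 \wedge b\neq 0)\;\vee\;(a=0 \wedge b=0 \wedge c=0).$$

Iterating this on any sentence yields a variable-free statement that can be checked directly. This also resolves the Peano-arithmetic worry above: the integers are not first-order definable in the real field, so no copy of arithmetic sneaks in.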
Venn Diagrams
August 20, 2012

Carl sent me a link to a Venn Diagrams post, so that got me thinking.

A Venn Diagram with $n$ atoms has to represent $2^n$ regions. For example, if $n$ is $2$, then you have the standard Venn Diagram below.
Each time you increase $n$ by one, you double the number of regions.
This makes me think of binary codes and orthogonal functions. Everybody’s favorite orthogonal functions are the trig functions, so you should be able to draw Venn diagrams with wavy trig functions. Here was my first try.
Those seemed kind of busy, so I dampened the amplitude on the high frequencies (making the slopes the same and possibly increasing artistic appeal). I really like the last one.
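Here is a short matplotlib sketch of one way such a construction could go (a guess at the recipe, not the post’s actual code): take set $k$ to be the region below $y_k(x) = 2^{-k}\sin(2^k \pi x)$, so each doubling of the frequency comes with a halving of the amplitude, keeping the maximum slope of every curve equal to $\pi$.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical reconstruction: set k is the region below
# y_k(x) = 2**-k * sin(2**k * pi * x).  Halving the amplitude as
# the frequency doubles keeps every curve's peak slope at pi.
n = 4
x = np.linspace(0.0, 2.0, 2000)
for k in range(n):
    plt.plot(x, 2.0**-k * np.sin(2.0**k * np.pi * x), label=f"set {k}")

plt.legend()
plt.title(f"{n} sine curves carve out up to 2**{n} = {2**n} sign regions")
plt.show()
```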

